Participating Frequently
October 22, 2012
Question

Amazon EC2, S3, s3fs, f4fpackager, and limits

  • 1 reply
  • 1013 views

Hello,

I have run into some limitations on the AMS (FMS) EC2 AMIs and how they work with content on S3.

The s3fs (S3 filesystem) bundled with the Adobe AMIs does not properly read from S3. I have connected an S3 bucket to /hds-vod per the instructions, but the entire source file is read from S3 on every player request for a dynamic segment/fragment. That is, there is no caching. It is unclear whether s3fs, FUSE, or httpd is failing to cache. And even if caching worked, how could the instance cache many TB of data (for concurrent fragmentations)? It appears the entire "AMS AMI on AWS" solution is a bit of an impossibility to start with?
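For reference, this is roughly what mounting the bucket looks like with a newer s3fs release that supports local caching (a sketch: the bucket name, cache path, and credentials file are examples, and the s3fs bundled with the Adobe AMI may predate these options, which would be consistent with the behavior above):

```shell
# Sketch of an s3fs mount with local caching enabled.
# use_cache:   keep fetched objects on local instance disk between requests
# allow_other: let httpd, running as a different user, read the mount
s3fs my-vod-bucket /hds-vod \
    -o passwd_file=/etc/passwd-s3fs \
    -o use_cache=/mnt/s3fs-cache \
    -o allow_other
```

Note that use_cache stores whole objects on local disk, so even where it works it would not address the multi-TB concern: the cache disk would still need to hold every file being fragmented concurrently.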

Potential solutions, and why they fail:

  • Same problem on both FMS 4.5.1 and AMS 5.0.x AMIs
  • Serving content from local instance disk or EBS is not possible, since we have many TB of data; the cost involved would be prohibitive. EBS volumes top out at 1 TB, too.
  • Update the instance (with "yum update")? No: all updates to the old s3fs on the Adobe AMI make no difference. Content is never cached.

Anyone have any good ideas? I really like the idea of AMS on EC2, with content in S3 and dynamic packaging. Can it be done?

Thanks,

Hans

This topic has been closed for replies.

1 reply

Adobe Employee
October 23, 2012

The s3fs shipped with the Adobe AMI has a limitation: it does not support files larger than 5 GB.

This is a known limitation of this version of s3fs.

s3fs is an open-source module, available for download from SourceForge, I think.

Newer versions of s3fs do not have this limitation.

However, simply updating s3fs will not help, since the new version of s3fs depends on updates to a lot of underlying OS components, which in turn requires upgrading to CentOS 6.x.

We acknowledge this issue and accept that we shall have to update the underlying OS of the Adobe AMIs. But until that happens, customers will have to split their content into files smaller than 5 GB.
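Until then, it is worth auditing the content mount for objects the bundled s3fs cannot serve. A minimal sketch (the /hds-vod default is the mount point from this thread; the helper name is mine):

```shell
# List files larger than 5 GB under a content directory, since the
# s3fs bundled with the Adobe AMI cannot serve them. Pass a directory
# argument to override the default /hds-vod mount point.
find_oversized() {
    find "${1:-/hds-vod}" -type f -size +5G
}
```

Anything it reports would need to be re-encoded or split before the current AMI can package it.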


Participating Frequently
December 4, 2012

Thanks hparmar

I have patched the s3fs code to properly cache requested byte-range content from S3, with no need to download the whole S3 file to disk. This works well in a CloudFront scenario where the Apache module asks the origin for byte ranges. Anyone interested in the patch, contact me.

-Hans