I have a server running vsftpd whose purpose is to provide limited FTP access to different paths on an Amazon S3 bucket. When the bucket is mounted with s3fs ([FuseOverAmazon]), the operating system reports each directory as containing itself and as having an abnormally large size (16+ petabytes). As a result, FTP clients cannot list directories: the LIST operation fails with an "invalid size" error. An open ticket on the s3fs Google Code page describes this problem in greater detail. See [here].
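For reference, the symptom shows up with an ordinary s3fs mount; the bucket name, mount point, and credential file below are placeholders for my actual setup:

```shell
# Mount the bucket with s3fs (bucket name, mount point, and credential
# file path are placeholders).
s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs -o allow_other

# Each directory under the mount then reports an impossibly large size,
# which is what breaks the FTP clients' LIST operation.
stat -c 'size: %s bytes' /mnt/s3/somedir
```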
This is a simple project and should be a breeze for sysadmins or developers with moderate Amazon AWS experience. We need our S3 buckets mounted on the server in a way that does not break vsftpd or introduce performance or data-integrity issues. Any method that achieves that goal is acceptable: modifying the current s3fs installation or mount command, replacing s3fs with an alternative such as s3fs-c, performing read/write operations against the bucket directly through Amazon's S3 API, etc. For security reasons I cannot provide direct access to the server, but I can provide any config files or log dumps needed and can execute commands for you either by correspondence or screen share, as preferred.
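As one example of the direct-API route, here is a rough sketch using the AWS CLI's high-level s3 commands; the bucket name and paths are placeholders, and it assumes credentials are already configured (via `aws configure` or an IAM role):

```shell
# List objects under a prefix directly through the S3 API,
# bypassing the FUSE layer entirely.
aws s3 ls s3://mybucket/some/prefix/

# Copy a single object down into the FTP serving directory.
aws s3 cp s3://mybucket/some/prefix/file.txt /srv/ftp/file.txt

# Push an uploaded file back to the bucket.
aws s3 cp /srv/ftp/upload.txt s3://mybucket/some/prefix/upload.txt
```

Whether this would be scripted around vsftpd, or the mount simply fixed, is up to whoever takes the project; both directions are fine with us.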