I am trying to figure this out. Anything I try that uses the s3:// prefix, like readAsBinary(), always ends up with "The AWS Access Key Id you provided does not exist in our records". Yet when I use downloadToFile() on the bucket object, there are no issues. Are there special permissions needed to use that prefix? I am not the AWS admin.
CF2023.
I don't see an edit button....
What I believe is happening is that you are mixing up the different methods CF provides for accessing S3.
Before ColdFusion introduced multi-cloud services (in CF2021, I believe), there was still a way to easily access S3 with most CF file-manipulation functions: use the prefix "s3://" on any operation of CFFILE or the other file functions.
However, to use that prefix, one way to get it to work was to include the AWS access key and secret key in the file path. So your call would need to look something like:
s3://#yourAccessKey#:#yoursecretKey#@yourbucket/foldername/filename.txt
(I believe you may be able to declare these keys in the application file so as not to need them in the path, but I always had trouble getting that to work reliably)
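For what it's worth, the Application.cfc approach mentioned above looks roughly like this. This is an untested sketch; the key values and region are placeholders:

```cfml
// Application.cfc -- untested sketch; replace the placeholder values
component {
    this.name = "s3Demo";
    // With these set, s3:// paths can omit the key:secret@ portion
    this.s3.accessKeyId = "YOUR_ACCESS_KEY_ID";
    this.s3.awsSecretKey = "YOUR_SECRET_KEY";
    this.s3.defaultLocation = "us-east-1";
}
```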
In addition, you would then need to set the ACL permissions with the storeSetACL() function if you wished the file to be made available to others.
Regarding the downloadToFile() function: that is part of the new multi-cloud functionality. The credentials and connection setup are likely already configured in your CF Administrator, so you don't have to mess with the credentials in the code.
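Combining the prefix path with the ACL call, a minimal write-then-share sketch might look like the following. Untested; the bucket, folder, and file names are made up:

```cfml
// Untested sketch: embed the keys in the path, then open up read access
s3Path = "s3://#myAccessKey#:#mySecretKey#@mybucket/foldername/hello.txt";
fileWrite( s3Path, "Hello from ColdFusion" );
// Without this, only the bucket owner can read the uploaded object
storeSetACL( s3Path, [ { group = "all", permission = "read" } ] );
```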
Here is the documentation for CF Cloud services.
Only functions listed in there will use your pre-configured cloud configs in CFAdmin.
All other file manipulation using the s3:// prefix must include the access key and secret key.
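By way of contrast, the new multi-cloud style looks roughly like this. Untested sketch: the alias names come from your CF Administrator setup, and the exact struct keys may differ, so check the Cloud services docs linked above:

```cfml
// Untested sketch of the multi-cloud approach
// (alias names and struct keys are assumptions -- verify against the docs)
s3 = getCloudService( "myAwsCredentialAlias", "myS3ConfigAlias" );
bucket = s3.bucket( "mybucket" );
bucket.downloadToFile( {
    "key": "foldername/filename.txt",
    "destination": expandPath( "./filename.txt" )
} );
```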
Thanks for the reply. Unfortunately, I have tried that as well, to no avail.
So the file functions are no longer valid in CF2023 from what Adobe told me.
Me: So none of the file tags are usable anymore?
Yes, please use the new implementation going forward.
Thanks,
Vikram
That's news to me. There's no mention of it anywhere I know of, though I could be wrong; I do pay very close attention to such deprecations and removals.
Now, there WAS a bug where some of the old S3 approaches failed, and it was fixed by the October updates: CF2023 Update 5 and CF2021 Update 11. John, can you clarify which update you're on?
More on that (fixed) bug here:
https://tracker.adobe.com/#/view/CF-4218035
Of course I was not on Update 5 on dev. smh But then that was a terrible answer from support. I had to refactor because of it. I will try the native functions now and see what I get. Thanks @Charlie Arehart
Well, I have tried the old way with all updates and still no joy.
Are others using this successfully?
Tried fileDelete() and cffile action="delete".
We use those 'old' S3 approaches every day. We were also a victim of the bug that Charlie mentioned and have been transitioning over to the new functions. But we have a lot of older code still using CF's built-in file functions with the "s3://" prefix, and they are working just fine.
We are running CF2023, Update 6, build 330617
Wondering if the AWS keys you are using don't have permission to access the buckets you're going after?
Those keys are the same ones used with the new cloud functions, and there they work.
Are you able to get the list of permissions for your user so I can compare?
We have some processes made just for CF that we've given full control over certain buckets.
what do you have set here
Looks like the same as yours:
What about your ACL lists?
We have the CF keys with Read and Write access.
only bucket owner
we have a bucket policy statement like this...
{
    "Sid": "AddPerm",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::imgxxxxxx/*"
},
Are you able to use any of the CF file functions with the s3:// prefix? Or are only certain functions, like fileDelete(), not working?
Wondering if you did not set the permissions, using the storeSetACL() function, on the files that were uploaded?
Most of the time, after uploading a file, we immediately set permissions. For instance, if it's an image resource meant for display:
S3perms = [
    { group = "all", permission = "read" },
    { id = "ownerid", permission = "full_control" }
]
These perms allow our CF instance full control for later access.
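If it helps, the call that applies an array like that is storeSetACL(). A minimal sketch; the bucket and object key are made up, and it assumes credentials are available via the path or Application settings:

```cfml
// Untested sketch: apply a permissions array to an uploaded object
S3perms = [
    { group = "all", permission = "read" },
    { id = "ownerid", permission = "full_control" }
];
storeSetACL( "s3://mybucket/images/photo.jpg", S3perms );
```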
It's all tags/functions. Except for the "new" way.
So I recreated the whole config in my personal AWS account and it works fine, so our admin has something else set that is denying this. Trying to explain that.
It sounds like it's on your AWS admin at this point. Wish I could help further.
yup, he figured it out. Thanks for the help. Any troubles with directoryCopy() to S3? I just seem to get nothing. No errors even.
Glad you got it working! Can you change the 'correct answer' on this thread?
Can't say I've ever used the directoryCopy() function. To be honest, I didn't even know it existed in CF until just now!
Learn something new each day.
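For anyone landing here later, a directoryCopy() call to S3 would presumably follow the same s3:// prefix pattern as the other file functions. Untested sketch; the credentials, bucket, and folder names are all made up:

```cfml
// Untested sketch: copy a local folder to S3 using the s3:// prefix
// (credentials embedded in the path; bucket/folder names are placeholders)
directoryCopy(
    expandPath( "./uploads" ),
    "s3://#myAccessKey#:#mySecretKey#@mybucket/uploads",
    true  // recurse into subdirectories
);
```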