For those of you who use Laravel, interacting with files is extremely simple thanks to the Storage facade. Everything is abstracted away very nicely. Today, I needed to change the upload flow for the application I was building. Until now it was using the Storage facade and everything worked fine: the client uploads the files to the server, and the server handles the upload to S3.
The issue with this approach is apparent when working with very large files. There’s no reason to hit the server with every file upload only to forward it to S3. Enter presigned URLs.
A presigned URL gives you access to the object identified in the URL, provided that the creator of the presigned URL has permissions to access that object. That is, if you receive a presigned URL to upload an object, you can upload the object only if the creator of the presigned URL has the necessary permissions to upload that object.
All objects and buckets by default are private. The presigned URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions. When you create a presigned URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The presigned URLs are valid only for the specified duration. That is, you must start the action before the expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to start a step with an expired URL.
In short, this means you can create a special URL that embeds a signature derived from your credentials, authorizing whoever holds it to retrieve or upload the object without needing AWS credentials of their own.
Something to be aware of (learned after a few hours of head bashing) is that the Storage::temporaryUrl() method only works for GETTING a file. It does not work for PUTTING a file.

After digging around a little bit in the Laravel source code, I found the reason for this. Internally, if you are using the S3 file driver, calling Storage::temporaryUrl() invokes getAwsTemporaryUrl(), which creates a signed URL for a GetObject command. So what we need to do is mimic the getAwsTemporaryUrl() function and use the PutObject command instead. As follows:
$adapter = Storage::getAdapter(); // Get the filesystem adapter
$client = $adapter->getClient(); // Get the AWS S3 client
$bucket = $adapter->getBucket(); // Get the current bucket

// Make a PutObject command
$cmd = $client->getCommand('PutObject', [
    'Bucket' => $bucket,
    'Key' => 'ItWorks',
    'ACL' => 'public-read' // Explained later
]);

// Get the presigned request
$request = $client->createPresignedRequest($cmd, '+20 minutes');

// Get the actual URL to make the request to
$presignedUrl = (string) $request->getUri();
You can then make a PUT request to the generated URL. Something to note is that if you add 'ACL' => 'public-read' to the command, you'll have to add the header x-amz-acl: public-read to the PUT request, otherwise you'll get a signature mismatch error.
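To complete the flow, the client PUTs the file straight to S3. In a real app this would usually happen in the browser, but here is a minimal server-side sketch using Guzzle (bundled with Laravel) that I find handy for testing; $presignedUrl is the URL generated above, and the file path is a placeholder:

```php
<?php

use GuzzleHttp\Client;

// Assumes $presignedUrl was generated by the snippet above
// and has not yet expired (+20 minutes in this example).
$response = (new Client())->request('PUT', $presignedUrl, [
    'headers' => [
        // Required because the command was signed with 'ACL' => 'public-read';
        // omit it and S3 returns a SignatureDoesNotMatch error.
        'x-amz-acl' => 'public-read',
    ],
    // Stream the file instead of loading it into memory.
    'body' => fopen('/path/to/local-file.txt', 'r'),
]);

// S3 answers a successful presigned PUT with HTTP 200.
echo $response->getStatusCode();
```

The equivalent curl would be a PUT with the same x-amz-acl header; the key point is that the headers sent must match the ones the URL was signed with.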
I hope that this helps someone out there.