Class: AWS::S3::S3Object
In: lib/aws/s3/object.rb
Parent: Base
S3Objects represent the data you store on S3. They have a key (their name) and a value (their data). All objects belong to a bucket.
You can store an object on S3 by specifying a key, its data and the name of the bucket you want to put it in:
S3Object.store('me.jpg', open('headshot.jpg'), 'photos')
The content type of the object will be inferred from its extension. If the appropriate content type cannot be inferred, S3 defaults to binary/octet-stream.
If you want to override this, you can explicitly indicate what content type the object should have with the :content_type option:
file = 'black-flowers.m4a'
S3Object.store(
  file,
  open(file),
  'jukebox',
  :content_type => 'audio/mp4a-latm'
)
You can read more about storing files on S3 in the documentation for S3Object.store.
If you just want to fetch an object you've stored on S3, specify its name and its bucket:
picture = S3Object.find 'headshot.jpg', 'photos'
N.B. The actual data for the file is not downloaded when the file appears in a bucket listing, nor when it is fetched directly as above. You get the data for the file like this:
picture.value
You can fetch just the object's data directly:
S3Object.value 'headshot.jpg', 'photos'
Or stream it by passing a block to stream:
open('song.mp3', 'w') do |file|
  S3Object.stream('song.mp3', 'jukebox') do |chunk|
    file.write chunk
  end
end
The data of the file, once downloaded, is cached, so subsequent calls to value won't redownload the file unless you tell the object to reload its value:
# Redownloads the file's data
song.value(:reload)
Other functionality includes:
# Check if an object exists?
S3Object.exists? 'headshot.jpg', 'photos'

# Copying an object
S3Object.copy 'headshot.jpg', 'headshot2.jpg', 'photos'

# Renaming an object
S3Object.rename 'headshot.jpg', 'portrait.jpg', 'photos'

# Deleting an object
S3Object.delete 'headshot.jpg', 'photos'
You can find out the content type of your object with the content_type method:
song.content_type # => "audio/mpeg"
You can change the content type as well if you like:
song.content_type = 'application/pdf'
song.store
(Keep in mind that due to limitations in S3's exposed API, the only way to change things like the content_type is to PUT the object onto S3 again. In the case of large files, this will result in fully re-uploading the file.)
A bevy of information about an object can be had using the about method:
pp song.about
{"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
 "content-type"     => "binary/octet-stream",
 "etag"             => "\"dc629038ffc674bee6f62eb64ff3a\"",
 "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
 "x-amz-request-id" => "B7BC68F55495B1C8",
 "server"           => "AmazonS3",
 "content-length"   => "3418766"}
You can get and set metadata for an object:
song.metadata
# => {}
song.metadata[:album] = "A River Ain't Too Much To Love"
# => "A River Ain't Too Much To Love"
song.metadata[:released] = 2005
pp song.metadata
{"x-amz-meta-released" => 2005,
 "x-amz-meta-album"    => "A River Ain't Too Much To Love"}
song.store
That metadata will be saved in S3 and is henceforth available from that object:
song = S3Object.find('black-flowers.mp3', 'jukebox')
pp song.metadata
{"x-amz-meta-released" => "2005",
 "x-amz-meta-album"    => "A River Ain't Too Much To Love"}
song.metadata[:released]
# => "2005"
song.metadata[:released] = 2006
pp song.metadata
{"x-amz-meta-released" => 2006,
 "x-amz-meta-album"    => "A River Ain't Too Much To Love"}
Aliases for store: create, save
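Because create and save are aliases for store, either can be used interchangeably with it (a small sketch, reusing the store example from above):

# Equivalent ways of storing the same object
S3Object.store('me.jpg', open('headshot.jpg'), 'photos')
S3Object.create('me.jpg', open('headshot.jpg'), 'photos')
S3Object.save('me.jpg', open('headshot.jpg'), 'photos')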
Fetch information about the object with key from bucket. Information includes content type, content length, last modified time, and others.
If the specified key does not exist, NoSuchKey is raised.
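For example, you could inspect an object's headers like this (a sketch; the values shown are illustrative, and the returned About object is assumed to be indexable like a Hash, as the output shown earlier suggests):

about = S3Object.about('headshot.jpg', 'photos')
about['content-type']   # => "image/jpeg"
about['content-length'] # => "3418766"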
# File lib/aws/s3/object.rb, line 202
def about(key, bucket = nil, options = {})
  response = head(path!(bucket, key, options), options)
  raise NoSuchKey.new("No such key `#{key}'", bucket) if response.code == 404
  About.new(response.headers)
end
Makes a copy of the object with key to copy_key, preserving the ACL of the existing object if the :copy_acl option is true (default false).
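For example, to make a copy that carries over the source object's ACL (a sketch):

S3Object.copy('headshot.jpg', 'headshot2.jpg', 'photos', :copy_acl => true)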
# File lib/aws/s3/object.rb, line 182
def copy(key, copy_key, bucket = nil, options = {})
  bucket          = bucket_name(bucket)
  source_key      = path!(bucket, key)
  default_options = {'x-amz-copy-source' => source_key}
  target_key      = path!(bucket, copy_key)
  returning put(target_key, default_options) do
    acl(copy_key, bucket, acl(key, bucket)) if options[:copy_acl]
  end
end
Delete object with key from bucket.
# File lib/aws/s3/object.rb, line 220
def delete(key, bucket = nil, options = {})
  # A bit confusing. Calling super actually makes an HTTP DELETE request. The delete method is
  # defined in the Base class. It happens to have the same name.
  super(path!(bucket, key, options), options).success?
end
Returns the object whose key is name in the specified bucket. If the specified key does not exist, a NoSuchKey exception will be raised.
# File lib/aws/s3/object.rb, line 145
def find(key, bucket = nil)
  # N.B. This is arguably a hack. From what the current S3 API exposes, when you retrieve a bucket, it
  # provides a listing of all the files in that bucket (assuming you haven't limited the scope of what it returns).
  # Each file in the listing contains information about that file. It is from this information that an S3Object is built.
  #
  # If you know the specific file that you want, S3 allows you to make a get request for that specific file and it returns
  # the value of that file in its response body. This response body is used to build an S3Object::Value object.
  # If you want information about that file, you can make a head request and the headers of the response will contain
  # information about that file. There is no way, though, to say, give me the representation of just this given file the same
  # way that it would appear in a bucket listing.
  #
  # When fetching a bucket, you can provide options which narrow the scope of what files should be returned in that listing.
  # Of those options, one is <tt>marker</tt> which is a string and instructs the bucket to return only objects whose key comes after
  # the specified marker according to alphabetic order. Another option is <tt>max-keys</tt> which defaults to 1000 but allows you
  # to dictate how many objects should be returned in the listing. With a combination of <tt>marker</tt> and <tt>max-keys</tt> you can
  # *almost* specify exactly which file you'd like it to return, but <tt>marker</tt> is not inclusive. In other words, if there is a bucket
  # which contains three objects whose keys are respectively 'a', 'b' and 'c', then fetching a bucket listing with marker set to 'b' will only
  # return 'c', not 'b'.
  #
  # Given all that, my hack to fetch a bucket with only one specific file, is to set the marker to the result of calling String#previous on
  # the desired object's key, which functionally makes the key ordered one degree higher than the desired object key according to
  # alphabetic ordering. This is a hack, but it should work around 99% of the time. I can't think of a scenario where it would return
  # something incorrect.

  # We need to ensure the key doesn't have extended characters but not uri escape it before doing the lookup and comparing since if the object exists,
  # the key on S3 will have been normalized
  key    = key.remove_extended unless key.valid_utf8?
  bucket = Bucket.find(bucket_name(bucket), :marker => key.previous, :max_keys => 1)
  # If our heuristic failed, trigger a NoSuchKey exception
  if (object = bucket.objects.first) && object.key == key
    object
  else
    raise NoSuchKey.new("No such key `#{key}'", bucket)
  end
end
When storing an object on the S3 servers using S3Object.store, the data argument can be a string or an I/O stream. If data is an I/O stream it will be read in segments and written to the socket incrementally. This approach may be desirable for very large files so they are not read into memory all at once.
# Non streamed upload
S3Object.store('greeting.txt', 'hello world!', 'marcel')

# Streamed upload
S3Object.store('roots.mpeg', open('roots.mpeg'), 'marcel')
# File lib/aws/s3/object.rb, line 235
def store(key, data, bucket = nil, options = {})
  validate_key!(key)
  # Must build path before inferring content type in case bucket is being used for options
  path = path!(bucket, key, options)
  infer_content_type!(key, options)

  put(path, options, data) # Don't call .success? on response. We want to get the etag.
end
# File lib/aws/s3/object.rb, line 137
def stream(key, bucket = nil, options = {}, &block)
  value(key, bucket, options) do |response|
    response.read_body(&block)
  end
end
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default, authenticated urls expire 5 minutes after they are generated.
Expiration options can be specified either as an absolute time since the epoch with the :expires option, or as a number of seconds relative to now with the :expires_in option:
# Absolute expiration date
# (Expires January 18th, 2038)
doomsday = Time.mktime(2038, 1, 18).to_i
S3Object.url_for('beluga_baby.jpg', 'marcel', :expires => doomsday)

# Expiration relative to now specified in seconds
# (Expires in 3 hours)
S3Object.url_for('beluga_baby.jpg', 'marcel', :expires_in => 60 * 60 * 3)
You can specify whether the url should go over SSL with the :use_ssl option:
# Url will use https protocol
S3Object.url_for('beluga_baby.jpg', 'marcel', :use_ssl => true)
By default, the ssl settings for the current connection will be used.
If you have an object handy, you can use its url method with the same options:
song.url(:expires_in => 30)
To get an unauthenticated url for the object, such as in the case when the object is publicly readable, pass the :authenticated option with a value of false.
S3Object.url_for('beluga_baby.jpg', 'marcel', :authenticated => false)
# => http://s3.amazonaws.com/marcel/beluga_baby.jpg
# File lib/aws/s3/object.rb, line 290
def url_for(name, bucket = nil, options = {})
  connection.url_for(path!(bucket, name, options), options) # Do not normalize options
end
Returns the value of the object with key in the specified bucket.
# File lib/aws/s3/object.rb, line 133
def value(key, bucket = nil, options = {}, &block)
  Value.new(get(path!(bucket, key, options), options, &block))
end
Interface to information about the current object. Information is read only, though some of its data can be modified through specific methods, such as content_type and content_type=.
pp some_object.about
{"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
 "x-amz-id-2"       => "LdcQRk5qLwxJQiZ8OH50HhoyKuqyWoJ67B6i+rOE5MxpjJTWh1kCkL+I0NQzbVQn",
 "content-type"     => "binary/octet-stream",
 "etag"             => "\"dc629038ffc674bee6f62eb68454ff3a\"",
 "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
 "x-amz-request-id" => "B7BC68F55495B1C8",
 "server"           => "AmazonS3",
 "content-length"   => "3418766"}

some_object.content_type
# => "binary/octet-stream"
some_object.content_type = 'audio/mpeg'
some_object.content_type
# => 'audio/mpeg'
some_object.store
# File lib/aws/s3/object.rb, line 512
def about
  stored? ? self.class.about(key, bucket.name) : About.new
end
The current object's bucket. If no bucket has been set, a NoBucketSpecified exception will be raised. For cases where you are not sure if the bucket has been set, you can use the belongs_to_bucket? method.
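For example (a sketch, assuming a freshly instantiated object that has not yet been assigned to a bucket):

object = S3Object.new
object.belongs_to_bucket? # => false
object.bucket             # raises NoBucketSpecified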
# File lib/aws/s3/object.rb, line 429
def bucket
  @bucket or raise NoBucketSpecified
end
Copies the current object, giving it the name copy_name. Keep in mind that due to limitations in S3's API, this operation requires retransmitting the entire object to S3.
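For example (a sketch, assuming song is an object fetched as in the earlier examples):

song.copy('black-flowers-copy.mp3')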
# File lib/aws/s3/object.rb, line 552
def copy(copy_name, options = {})
  self.class.copy(key, copy_name, bucket.name, options)
end
# File lib/aws/s3/object.rb, line 562
def etag(reload = false)
  return nil unless stored?
  expirable_memoize(reload) do
    reload ? about(reload)['etag'][1...-1] : attributes['e_tag'][1...-1]
  end
end
Returns the key of the object. If the key is not set, a NoKeySpecified exception will be raised. For cases where you are not sure if the key has been set, you can use the key_set? method. Objects must have a key set to be saved onto S3. Objects which have already been saved onto S3 will always have their key set.
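For example (a sketch, reusing the song object from the metadata examples above):

song.key # => "black-flowers.mp3"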
# File lib/aws/s3/object.rb, line 449
def key
  attributes['key'] or raise NoKeySpecified
end
Returns true if the current object has had its key set. Objects which have already been saved will always return true. This method is useful for objects which have not been saved yet so you know if you need to set the object's key, since you cannot save an object unless its key has been set.
object.store if object.key_set? && object.belongs_to_bucket?
# File lib/aws/s3/object.rb, line 463
def key_set?
  !attributes['key'].nil?
end
Interface for viewing and editing metadata for the current object. It can be treated like a Hash.
some_object.metadata
# => {}
some_object.metadata[:author] = 'Dave Thomas'
some_object.metadata
# => {"x-amz-meta-author" => "Dave Thomas"}
some_object.metadata[:author]
# => "Dave Thomas"
# File lib/aws/s3/object.rb, line 526
def metadata
  about.metadata
end
Saves the current object with the specified options. Valid options are listed in the documentation for S3Object::store.
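For example, saving a change to the current object (a sketch building on the earlier metadata example; store returns whether the request succeeded):

song.metadata[:released] = 2006
song.store # => true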
# File lib/aws/s3/object.rb, line 532
def store(options = {})
  raise DeletedObject if frozen?
  options  = about.to_headers.merge(options) if stored?
  response = self.class.store(key, value, bucket.name, options)
  bucket.update(:stored, self)
  response.success?
end
Generates an authenticated url for the current object. Accepts the same options as its class method counterpart S3Object.url_for.
# File lib/aws/s3/object.rb, line 577
def url(options = {})
  self.class.url_for(key, bucket.name, options)
end
Lazily loads object data.
Force a reload of the data by passing :reload.
object.value(:reload)
When loading the data for the first time you can optionally yield to a block which will allow you to stream the data in segments.
object.value do |segment|
  send_data segment
end
The full list of options is given in the documentation for its class method counterpart, S3Object::value.
# File lib/aws/s3/object.rb, line 481
def value(options = {}, &block)
  if options.is_a?(Hash)
    reload  = !options.empty?
  else
    reload  = options
    options = {}
  end
  expirable_memoize(reload) do
    self.class.stream(key, bucket.name, options, &block)
  end
end