Class: MemCache
In: lib/memcache.rb (CVS)
Parent: Object

A Ruby implementation of the ‘memcached’ client interface.
c_threshold | -> | compression_threshold |
c_threshold | [RW] | The compression threshold setting, in bytes. Values larger than this threshold will be compressed by #[]= (and set) and decompressed by #[] (and get). |
compression | [RW] | Turn compression on or off temporarily. |
debug | [RW] | Debugging flag — when set to true, debugging output will be sent to $deferr. If set to an object which supports either #<< or #call, debugging output will be sent to it via that method instead (#call being preferred). If set to false or nil, no debugging output will be generated. |
hashfunc | [RW] | The function (a Method or Proc object) which will be used to hash keys for determining where values are stored. |
mutex | [R] | The Sync mutex object for the cache |
namespace | [RW] | The namespace that will be prepended to all keys set/fetched from the cache. |
servers | [R] | The Array of MemCache::Server objects that represent the memcached instances the client will use. |
stats | [R] | Hash of counts of cache operations, keyed by operation (e.g., +:delete+, +:flush_all+, +:set+, +:add+, etc.). Each value of the hash is another hash with statistics for the corresponding operation:
    { :stime => <total system time of all calls>,
      :utime => <total user time of all calls>,
      :count => <number of calls> } |
stats_callback | [RW] | Settable statistics callback — setting this to an object that responds to call will cause it to be called once for each operation with the operation type (as a Symbol), and Struct::Tms objects created immediately before and after the operation. |
times | [R] | Hash of system/user time-tuples for each op |
urlencode | [RW] | If this is true, all keys will be urlencoded before being sent to the cache. |
Create a new memcache object that will distribute gets and sets between the specified servers. You can also pass one or more options as hash arguments; the options recognized by the constructor are :debug, :c_threshold, :compression, :namespace, :readonly, :urlencode, and :connect_timeout.
If a block is given, it is used as the default hash function for determining which server the key (given as an argument to the block) is stored/fetched from.
# File lib/memcache.rb, line 202
def initialize( *servers, &block )
    opts = servers.pop if servers.last.is_a?( Hash )
    opts = DefaultOptions.merge( opts || {} )

    @debug       = opts[:debug]
    @c_threshold = opts[:c_threshold]
    @compression = opts[:compression]
    @namespace   = opts[:namespace]
    @readonly    = opts[:readonly]
    @urlencode   = opts[:urlencode]
    @timeout     = opts[:connect_timeout]

    @buckets  = nil
    @hashfunc = block || lambda {|val| val.hash}
    @mutex    = Sync::new
    @reactor  = IO::Reactor::new

    # Stats is an auto-vivifying hash -- an access to a key that hasn't yet
    # been created generates a new stats subhash
    @stats = Hash::new {|hsh,k|
        hsh[k] = {:count => 0, :utime => 0.0, :stime => 0.0}
    }
    @stats_callback = nil

    self.servers = servers
end
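The option handling at the top of #initialize relies on Ruby's trailing-hash idiom. A minimal standalone sketch (DEFAULTS and split_args are illustrative names, not part of the MemCache API):

```ruby
# Separate a variadic server list from a trailing options hash, as
# #initialize does, merging the result over a set of defaults.
DEFAULTS = { :debug => false, :c_threshold => 10_240 }

def split_args( *servers )
  opts = servers.pop if servers.last.is_a?( Hash )
  opts = DEFAULTS.merge( opts || {} )
  [ servers, opts ]
end

servers, opts = split_args( "cache1:11211", "cache2:11211", :debug => true )
```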
Index assignment method. Supports slice-setting, e.g.:
cache[ :foo, :bar ] = 12, "darkwood"
This uses set_many internally if there is more than one key, or set if there is only one.
# File lib/memcache.rb, line 440
def []=( *args )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?

    # Use #set if there's only one pair
    if args.length <= 2
        self.set( *args )
    else
        # Args from a slice-style call like
        #   cache[ :foo, :bar ] = 1, 2
        # will be passed in like:
        #   ( :foo, :bar, [1, 2] )
        # so just shift the value part off, transpose them into a Hash and
        # pass them on to #set_many.
        vals = args.pop
        vals = [vals] unless # Handle [:a,:b] = 1
            vals.is_a?( Array ) && args.nitems > 1
        pairs = {}
        [ args, vals ].transpose.each {|k,v| pairs[k] = v}
        self.set_many( pairs )
    end

    # It doesn't matter what this returns, as Ruby ignores it for some
    # reason.
    return nil
end
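The argument shuffling in the else branch is easier to see in isolation. A standalone sketch of the same transpose idiom (using Array#length in place of the 1.8-era #nitems):

```ruby
# cache[ :foo, :bar ] = 12, "darkwood" arrives as ( :foo, :bar, [12, "darkwood"] );
# pop the value part off, then transpose keys and values into a Hash.
args = [ :foo, :bar, [ 12, "darkwood" ] ]
vals = args.pop
vals = [ vals ] unless vals.is_a?( Array ) && args.length > 1
pairs = {}
[ args, vals ].transpose.each {|k,v| pairs[k] = v}
```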
Returns true if there is at least one active server for the receiver.
# File lib/memcache.rb, line 348
def active?
    not @servers.empty?
end
Like set, but only stores the tuple if it doesn’t already exist.
# File lib/memcache.rb, line 470
def add( key, val, exptime=0 )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    @mutex.synchronize( Sync::EX ) {
        self.store( :add, key, val, exptime )
    }
end
Like incr, but decrements. Unlike incr, underflow is checked, and new values are capped at 0: if the server's value is 1, a decrement of 2 returns 0, not -1.
# File lib/memcache.rb, line 508
def decr( key, val=1 )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    @mutex.synchronize( Sync::EX ) {
        self.incrdecr( :decr, key, val )
    }
end
Delete the entry with the specified key, optionally at the specified time.
# File lib/memcache.rb, line 520
def delete( key, time=nil )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    svr = nil

    res = @mutex.synchronize( Sync::EX ) {
        svr = self.get_server( key )
        cachekey = self.make_cache_key( key )

        self.add_stat( :delete ) do
            cmd = "delete %s%s" % [ cachekey, time ? " #{time.to_i}" : "" ]
            self.send( svr => cmd )
        end
    }

    res && res[svr].rstrip == "DELETED"
end
Mark all entries on all servers as expired.
# File lib/memcache.rb, line 540
def flush_all
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?

    res = @mutex.synchronize( Sync::EX ) {
        # Build commandset for servers that are alive
        servers = @servers.select {|svr| svr.alive? }
        cmds = self.make_command_map( "flush_all", servers )

        # Send them in parallel
        self.add_stat( :flush_all ) {
            self.send( cmds )
        }
    }

    !res.find {|svr,st| st.rstrip != 'OK'}
end
Fetch and return the values associated with the given keys from the cache. Returns nil for any value that wasn’t in the cache.
# File lib/memcache.rb, line 355
def get( *keys )
    raise MemCacheError, "no active servers" unless self.active?
    hash = nil

    @mutex.synchronize( Sync::SH ) {
        hash = self.fetch( :get, *keys )
    }

    return *(hash.values_at( *keys ))
end
Fetch and return the values associated with the given keys from the cache as a Hash object. Returns nil for any value that wasn’t in the cache.
# File lib/memcache.rb, line 371
def get_hash( *keys )
    raise MemCacheError, "no active servers" unless self.active?
    return @mutex.synchronize( Sync::SH ) {
        self.fetch( :get_hash, *keys )
    }
end
Atomically increment the value associated with key by val. Returns nil if the value doesn’t exist in the cache, or the new value after incrementing if it does. val should be zero or greater. Overflow on the server is not checked. Beware of values approaching 2**32.
# File lib/memcache.rb, line 495
def incr( key, val=1 )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    @mutex.synchronize( Sync::EX ) {
        self.incrdecr( :incr, key, val )
    }
end
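The warning about values approaching 2**32 comes from the server side: memcached counters are unsigned 32-bit integers, so an increment past the maximum wraps around. A sketch of that arithmetic (a model for illustration, not client code):

```ruby
# Model of memcached's unsigned 32-bit counter wraparound on incr.
MAX32 = 2**32

def incr32( current, delta )
  ( current + delta ) % MAX32
end
```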
Return a human-readable version of the cache object.
# File lib/memcache.rb, line 233
def inspect
    "<MemCache: %d servers/%s buckets: ns: %p, debug: %p, cmp: %p, ro: %p>" % [
        @servers.nitems,
        @buckets.nil? ? "?" : @buckets.nitems,
        @namespace,
        @debug,
        @compression,
        @readonly,
    ]
end
Returns true if the cache was created read-only.
# File lib/memcache.rb, line 304
def readonly?
    @readonly
end
Like set, but only stores the tuple if it already exists.
# File lib/memcache.rb, line 481
def replace( key, val, exptime=0 )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    @mutex.synchronize( Sync::EX ) {
        self.store( :replace, key, val, exptime )
    }
end
Return item stats from the specified servers
# File lib/memcache.rb, line 651
def server_item_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats items", asvrs )

    # Send them in parallel
    return self.add_stat( :server_stats_items ) do
        self.send( cmds ) do |svr,reply|
            self.parse_stats( reply )
        end
    end
end
Return malloc stats from the specified servers (not supported on all platforms)
# File lib/memcache.rb, line 613
def server_malloc_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats malloc", asvrs )

    # Send them in parallel
    return self.add_stat( :server_malloc_stats ) do
        self.send( cmds ) do |svr,reply|
            self.parse_stats( reply )
        end
    end
rescue MemCache::InternalError
    self.debug_msg( "One or more servers doesn't support 'stats malloc'" )
    return {}
end
Return memory maps from the specified servers (not supported on all platforms)
# File lib/memcache.rb, line 595
def server_map_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats maps", asvrs )

    # Send them in parallel
    return self.add_stat( :server_map_stats ) do
        self.send( cmds )
    end
rescue MemCache::ServerError => err
    self.debug_msg "%p doesn't support 'stats maps'" % err.server
    return {}
end
Reset statistics on the given servers.
# File lib/memcache.rb, line 578
def server_reset_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats reset", asvrs )

    # Send them in parallel
    return self.add_stat( :server_reset_stats ) do
        self.send( cmds ) do |svr,reply|
            reply.rstrip == "RESET"
        end
    end
end
Return item size stats from the specified servers
# File lib/memcache.rb, line 667
def server_size_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats sizes", asvrs )

    # Send them in parallel
    return self.add_stat( :server_stats_sizes ) do
        self.send( cmds ) do |svr,reply|
            reply.sub( /#{CRLF}END#{CRLF}/, '' ).split( /#{CRLF}/ )
        end
    end
end
Return slab stats from the specified servers
# File lib/memcache.rb, line 632
def server_slab_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "stats slabs", asvrs )

    # Send them in parallel
    return self.add_stat( :server_slab_stats ) do
        self.send( cmds ) do |svr,reply|
            ### :TODO: I could parse the results from this further to split
            ### out the individual slabs into their own sub-hashes, but this
            ### will work for now.
            self.parse_stats( reply )
        end
    end
end
Return a hash of statistics hashes for each of the specified servers.
# File lib/memcache.rb, line 562
def server_stats( servers=@servers )
    # Build commandset for servers that are alive
    asvrs = servers.select {|svr| svr.alive?}
    cmds = self.make_command_map( "stats", asvrs )

    # Send them in parallel
    return self.add_stat( :server_stats ) do
        self.send( cmds ) do |svr,reply|
            self.parse_stats( reply )
        end
    end
end
Set the servers the memcache will distribute gets and sets between. Arguments can be either Strings of the form "hostname:port" (or "hostname:port:weight"), or MemCache::Server objects.
# File lib/memcache.rb, line 313
def servers=( servers )
    @mutex.synchronize( Sync::EX ) {
        @servers = servers.collect {|svr|
            self.debug_msg( "Transforming svr = %p", svr )

            case svr
            when String
                host, port, weight = svr.split( /:/, 3 )
                weight ||= DefaultServerWeight
                port ||= DefaultPort
                Server::new( host, port.to_i, weight, @timeout )

            when Array
                host, port = svr[0].split(/:/, 2)
                weight = svr[1] || DefaultServerWeight
                port ||= DefaultPort
                Server::new( host, port.to_i, weight, @timeout )

            when Server
                svr

            else
                raise TypeError, "cannot convert %s to MemCache::Server" %
                    svr.class.name
            end
        }

        @buckets = nil
    }

    return @servers # (ignored)
end
Unconditionally set the entry in the cache under the given key to value, returning true on success. The optional exptime argument specifies an expiration time for the tuple, in seconds relative to the present if it’s less than 60*60*24*30 (30 days), or as an absolute Unix time (e.g., Time#to_i) if greater. If exptime is zero, the entry will never expire.
# File lib/memcache.rb, line 398
def set( key, val, exptime=0 )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    rval = nil

    @mutex.synchronize( Sync::EX ) {
        rval = self.store( :set, key, val, exptime )
    }

    return rval
end
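The exptime rule described above can be sketched as a small helper (expiry_time and THIRTY_DAYS are illustrative names; the cutoff behavior is the server's, reproduced here for clarity):

```ruby
# Interpret an exptime the way memcached does: 0 means "never expire",
# values under 30 days are offsets from now, larger values are absolute
# Unix timestamps.
THIRTY_DAYS = 60 * 60 * 24 * 30

def expiry_time( exptime, now = Time.now.to_i )
  return nil if exptime == 0          # never expires
  exptime < THIRTY_DAYS ? now + exptime : exptime
end
```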
Multi-set method; unconditionally set each key/value pair in pairs. The call to set each value is done synchronously, but until memcached supports a multi-set operation this is only a little more efficient than calling set for each pair yourself.
# File lib/memcache.rb, line 415
def set_many( pairs )
    raise MemCacheError, "no active servers" unless self.active?
    raise MemCacheError, "readonly cache" if self.readonly?
    raise MemCacheError,
        "expected an object that responds to the #each_pair message" unless
        pairs.respond_to?( :each_pair )

    rvals = []

    # Just iterate over the pairs, setting them one-by-one until memcached
    # supports multi-set.
    @mutex.synchronize( Sync::EX ) {
        pairs.each_pair do |key, val|
            rvals << self.store( :set, key, val, 0 )
        end
    }

    return rvals
end
Statistics wrapper: increment the execution count and processor times for the given operation type for the specified server.
# File lib/memcache.rb, line 967
def add_stat( type )
    raise LocalJumpError, "no block given" unless block_given?

    # Time the block
    starttime = Process::times
    res = yield
    endtime = Process::times

    # Add time/call stats callback
    @stats[type][:count] += 1
    @stats[type][:utime] += endtime.utime - starttime.utime
    @stats[type][:stime] += endtime.stime - starttime.stime
    @stats_callback.call( type, starttime, endtime ) if @stats_callback

    return res
end
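The auto-vivifying stats hash from #initialize and the timing logic of #add_stat can be exercised together in a standalone sketch (method and variable names here are illustrative):

```ruby
# An auto-vivifying stats hash plus a block-timing wrapper mirroring #add_stat:
# accessing a key that hasn't been touched yet generates a fresh subhash.
stats = Hash.new {|hsh,k| hsh[k] = { :count => 0, :utime => 0.0, :stime => 0.0 } }

def time_op( stats, type )
  starttime = Process.times
  res = yield
  endtime = Process.times

  stats[type][:count] += 1
  stats[type][:utime] += endtime.utime - starttime.utime
  stats[type][:stime] += endtime.stime - starttime.stime
  res
end

time_op( stats, :set ) { "STORED" }
time_op( stats, :set ) { "STORED" }
```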
Write a message (formed sprintf-style from fmt and args) to the debugging callback in @debug, or to $deferr if @debug isn’t a callable object but is still true. If @debug is nil or false, do nothing.
# File lib/memcache.rb, line 989
def debug_msg( fmt, *args )
    return unless @debug

    if @debug.respond_to?( :call )
        @debug.call( fmt % args )
    elsif @debug.respond_to?( :<< )
        @debug << "#{fmt}\n" % args
    else
        $deferr.puts( fmt % args )
    end
end
Fetch the values corresponding to the given keys from the cache and return them as a Hash.
# File lib/memcache.rb, line 808
def fetch( type, *keys )
    # Make a hash to hold servers => commands for the keys to be fetched,
    # and one to match cache keys to user keys.
    map = Hash::new {|hsh,key| hsh[key] = 'get'}
    cachekeys = {}
    res = {}

    self.add_stat( type ) {
        # Map the key's server to the command to fetch its value
        keys.each do |key|
            svr = self.get_server( key )
            ckey = self.make_cache_key( key )
            cachekeys[ ckey ] = key
            map[ svr ] << " " + ckey
        end

        # Send the commands and map the results hash into the return hash
        self.send( map, true ) do |svr, reply|
            # Iterate over the replies, stripping first the 'VALUE
            # <cachekey> <flags> <len>' line with a regexp and then the data
            # line by length as specified by the VALUE line.
            while reply.sub!( /^VALUE (\S+) (\d+) (\d+)\r\n/, '' )
                ckey, flags, len = $1, $2.to_i, $3.to_i
                self.debug_msg( "Reply is: key=%p flags=%d len=%d",
                    ckey, flags, len )

                # Restore compressed and thawed values that require it.
                data = reply.slice!( 0, len + 2 ) # + CRLF
                rval = self.restore( data[0,len], flags )

                self.debug_msg( "Restored: %p", rval )
                res[ cachekeys[ckey] ] = rval
            end

            self.debug_msg( "Tail of reply is: %p", reply )
            unless reply == "END" + CRLF
                raise MemCacheError, "Malformed reply fetched from %p: %p" %
                    [ svr, rval ]
            end
        end
    }

    return res
end
Get the server corresponding to the given key.
# File lib/memcache.rb, line 726
def get_server( key )
    svr = nil

    @mutex.synchronize( Sync::SH ) {
        if @servers.length == 1
            self.debug_msg( "Only one server: using %p", @servers.first )
            svr = @servers.first
        else
            # If the key is an integer, it's assumed to be a precomputed hash
            # key so don't bother hashing it. Otherwise use the hashing function
            # to come up with a hash of the key to determine which server to
            # talk to
            hkey = nil
            if key.is_a?( Integer )
                hkey = key
            else
                hkey = @hashfunc.call( key )
            end

            # Set up buckets if they haven't been already
            unless @buckets
                @mutex.synchronize( Sync::EX ) {
                    # Check again after switching to an exclusive lock
                    unless @buckets
                        @buckets = []
                        @servers.each do |svr|
                            self.debug_msg( "Adding %d buckets for %p",
                                svr.weight, svr )
                            svr.weight.times { @buckets.push(svr) }
                        end
                    end
                }
            end

            # Fetch a server for the given key, retrying if that server is
            # offline
            20.times do |tries|
                svr = @buckets[ (hkey + tries) % @buckets.nitems ]
                break if svr.alive?
                self.debug_msg( "Skipping dead server %p", svr )
                svr = nil
            end
        end
    }

    raise MemCacheError, "No servers available" if svr.nil? || !svr.alive?
    return svr
end
Handle an IO event ev on the given sock for the specified server, expecting single-line syntax (i.e., ends with CRLF).
# File lib/memcache.rb, line 1094
def handle_line_io( sock, ev, server, buffers, multiline=false )
    self.debug_msg( "Line IO (ml=%p) event for %p: %s: %p - %p",
        multiline, sock, ev, server, buffers )

    # Set the terminator pattern based on whether multiline is turned on or
    # not.
    terminator = multiline ? MULTILINE_TERMINATOR : LINE_TERMINATOR

    # Handle the event
    case ev
    when :read
        len = buffers[:rbuf].length
        buffers[:rbuf] << sock.sysread( 256 )
        self.debug_msg "Read %d bytes." % [ buffers[:rbuf].length - len ]

        # If there's an error, then we're done with this socket. Likewise
        # if we've read the whole reply.
        if ANY_ERROR.match( buffers[:rbuf][0..MAX_ERROR_LENGTH] ) ||
           terminator.match( buffers[:rbuf] )
            self.debug_msg "Done with read for %p: %p", sock, buffers[:rbuf]
            @reactor.remove( sock )
        end

    when :write
        res = sock.send( buffers[:wbuf], SendFlags )
        self.debug_msg( "Wrote %d bytes.", res )
        buffers[:wbuf].slice!( 0, res ) unless res.zero?

        # If the write buffer's done, then we don't care about writability
        # anymore, so clear that event.
        if buffers[:wbuf].empty?
            self.debug_msg "Done with write for %p" % sock
            @reactor.disableEvents( sock, :write )
        end

    when :err
        so_error = sock.getsockopt( SOL_SOCKET, SO_ERROR )
        self.debug_msg "Socket error on %p: %s" % [ sock, so_error ]
        @reactor.remove( sock )
        server.mark_dead( so_error )

    else
        raise ArgumentError, "Unhandled reactor event type: #{ev}"
    end

rescue EOFError, IOError => err
    @reactor.remove( sock )
    server.mark_dead( err.message )
end
Handle error messages defined in the memcached protocol. The buffer argument will be parsed for the error type, and, if appropriate, the error message. The server argument is only used in the case of SERVER_ERROR, in which case the raised exception will contain that object. The depth argument is used to specify the call depth from which the exception’s stacktrace should be gathered.
# File lib/memcache.rb, line 1150
def handle_protocol_error( buffer, server, depth=4 )
    case buffer
    when CLIENT_ERROR
        raise ClientError, $1, caller(depth)
    when SERVER_ERROR
        raise ServerError::new( server ), $1, caller(depth)
    else
        raise InternalError, "Unknown internal error", caller(depth)
    end
end
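The dispatch can be tried standalone; the patterns below are illustrative stand-ins for the library's CLIENT_ERROR and SERVER_ERROR constants, and a tuple is returned instead of raising:

```ruby
# Classify a memcached error line the way #handle_protocol_error does,
# capturing the message text for CLIENT_ERROR and SERVER_ERROR replies.
CLIENT_ERR = /\ACLIENT_ERROR (.+?)\r?\n/
SERVER_ERR = /\ASERVER_ERROR (.+?)\r?\n/

def classify_error( buffer )
  case buffer
  when CLIENT_ERR then [ :client, $1 ]
  when SERVER_ERR then [ :server, $1 ]
  else                 [ :internal, nil ]
  end
end
```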
Increment/decrement the value associated with key on the server by val.
# File lib/memcache.rb, line 860
def incrdecr( type, key, val )
    svr = self.get_server( key )
    cachekey = self.make_cache_key( key )

    # Form the command, send it, and read the reply
    res = self.add_stat( type ) {
        cmd = "%s %s %d" % [ type, cachekey, val ]
        self.send( svr => cmd )
    }

    # De-stringify the number if it is one and return it as an Integer, or
    # nil if it isn't a number.
    if /^(\d+)/.match( res[svr] )
        return Integer( $1 )
    else
        return nil
    end
end
Create a key for the cache from any object. Strings are used as-is, Symbols are stringified, and other values use their hash method.
# File lib/memcache.rb, line 1004
def make_cache_key( key )
    ck = @namespace ? "#@namespace:" : ""

    case key
    when String, Symbol
        ck += key.to_s
    else
        ck += "%s" % key.hash
    end

    ck = uri_escape( ck ) unless !@urlencode
    self.debug_msg( "Cache key for %p: %p", key, ck )
    return ck
end
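The key-building rules reduce to a few lines; here is a standalone sketch that omits the urlencoding step (make_key is an illustrative name, not the library's method):

```ruby
# Build a cache key: optional namespace prefix, Strings/Symbols used
# textually, anything else reduced via its #hash value.
def make_key( key, namespace = nil )
  ck = namespace ? "#{namespace}:" : ""
  case key
  when String, Symbol then ck + key.to_s
  else ck + key.hash.to_s
  end
end
```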
Create a hash mapping the specified command to each of the given servers.
# File lib/memcache.rb, line 689
def make_command_map( command, servers=@servers )
    Hash[ *([servers, [command]*servers.nitems].transpose.flatten) ]
end
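The one-liner is dense; the same transpose/flatten idiom with plain strings standing in for MemCache::Server objects:

```ruby
# Pair one command with every server: zip the two lists together, then
# splat the flattened pairs into Hash[].
servers = [ "cache1", "cache2", "cache3" ]
cmds = Hash[ *( [ servers, ["stats"] * servers.length ].transpose.flatten ) ]
```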
Parse raw statistics lines from a memcached ‘stats’ reply and return a Hash.
# File lib/memcache.rb, line 696
def parse_stats( reply )
    # Trim off the footer
    self.debug_msg "Parsing stats reply: %p" % [reply]
    reply.sub!( /#{CRLF}END#{CRLF}/, '' )

    # Make a hash out of the other values
    pairs = reply.split( /#{CRLF}/ ).collect {|line|
        stat, name, val = line.split(/\s+/, 3)
        name = name.to_sym
        self.debug_msg "Converting %s stat: %p" % [name, val]

        if StatConverters.key?( name )
            self.debug_msg "Using %s converter: %p" %
                [ name, StatConverters[name] ]
            val = StatConverters[ name ].call( val )
        else
            self.debug_msg "Using default converter"
            val = StatConverters[ :__default__ ].call( val )
        end

        self.debug_msg "... converted to: %p (%s)" % [ val, val.class.name ]
        [name,val]
    }

    return Hash[ *(pairs.flatten) ]
end
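The parsing strategy can be reproduced standalone; CONVERTERS below is a simplified stand-in for the library's StatConverters table, and the reply text is a hand-built example:

```ruby
# Parse "STAT <name> <value>" lines from a memcached `stats` reply into a
# Hash, converting values to Integers where possible.
CRLF = "\r\n"
CONVERTERS = Hash.new( lambda {|v| Integer( v ) rescue v } )
CONVERTERS[:version] = lambda {|v| v }

def parse_stats( reply )
  reply = reply.sub( /#{CRLF}END#{CRLF}\z/, '' )   # trim off the footer
  pairs = reply.split( CRLF ).map do |line|
    _stat, name, val = line.split( /\s+/, 3 )
    name = name.to_sym
    [ name, CONVERTERS[name].call( val ) ]
  end
  Hash[ *pairs.flatten ]
end
```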
Prepare the specified value val for insertion into the cache, serializing and compressing as necessary/configured.
# File lib/memcache.rb, line 882
def prep_value( val )
    sval = nil
    flags = 0

    # Serialize if something other than a String, Numeric
    case val
    when String
        sval = val.dup
    when Numeric
        sval = val.to_s
        flags |= F_NUMERIC
    else
        self.debug_msg( "Serializing %p", val )
        sval = Marshal::dump( val )
        flags |= F_SERIALIZED
    end

    # Compress if compression is enabled, the value exceeds the
    # compression threshold, and the compressed value is smaller than
    # the uncompressed version.
    if @compression && sval.length > @c_threshold
        zipped = Zlib::Deflate::deflate( sval, Zlib::BEST_SPEED )

        if zipped.length < (sval.length * MinCompressionRatio)
            self.debug_msg "Using compressed value (%d/%d)" %
                [ zipped.length, sval.length ]
            sval = zipped
            flags |= F_COMPRESSED
        end
    end

    # Urlencode unless told not to
    unless !@urlencode
        sval = uri_escape( sval )
        flags |= F_ESCAPED
    end

    return sval, flags
end
Restore the specified value val from the form inserted into the cache, given the specified flags.
# File lib/memcache.rb, line 934
def restore( val, flags=0 )
    self.debug_msg( "Restoring value %p (flags: %d)", val, flags )
    rval = val.dup

    # De-urlencode
    if (flags & F_ESCAPED).nonzero?
        rval = URI::unescape( rval )
    end

    # Decompress
    if (flags & F_COMPRESSED).nonzero?
        rval = Zlib::Inflate::inflate( rval )
    end

    # Unserialize
    if (flags & F_SERIALIZED).nonzero?
        rval = Marshal::load( rval )
    end

    if (flags & F_NUMERIC).nonzero?
        if /\./.match( rval )
            rval = Float( rval )
        else
            rval = Integer( rval )
        end
    end

    return rval
end
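The flag scheme driving #prep_value and #restore round-trips cleanly. A simplified sketch that always serializes and skips the urlencode and numeric cases (the flag values and method names here are illustrative, not the library's own constants):

```ruby
require 'zlib'

F_SERIALIZED = 1
F_COMPRESSED = 2

# Serialize a value, compressing it if the payload exceeds a size threshold
# and compression actually shrinks it; return the payload plus its flags.
def prep( val, threshold )
  sval = Marshal.dump( val )
  flags = F_SERIALIZED

  if sval.length > threshold
    zipped = Zlib::Deflate.deflate( sval, Zlib::BEST_SPEED )
    if zipped.length < sval.length
      sval = zipped
      flags |= F_COMPRESSED
    end
  end

  [ sval, flags ]
end

# Reverse whichever transformations the flags indicate, in reverse order.
def restore( data, flags )
  data = Zlib::Inflate.inflate( data ) if (flags & F_COMPRESSED).nonzero?
  data = Marshal.load( data )          if (flags & F_SERIALIZED).nonzero?
  data
end
```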
Given pairs of MemCache::Server objects and Strings or Arrays of commands for each server, do multiplexed IO between all of them, reading single-line responses.
# File lib/memcache.rb, line 1026
def send( pairs, multiline=false )
    self.debug_msg "Send for %d pairs: %p", pairs.length, pairs
    raise TypeError, "type mismatch: #{pairs.class.name} given" unless
        pairs.is_a?( Hash )
    buffers = {}
    rval = {}

    # Fetch the Method object for the IO handler
    handler = self.method( :handle_line_io )

    # Set up the buffers and reactor for the exchange
    pairs.each do |server,cmds|
        unless server.alive?
            rval[server] = nil
            pairs.delete( server )
            next
        end

        # Handle either Arrayish or Stringish commandsets
        wbuf = cmds.respond_to?( :join ) ? cmds.join( CRLF ) : cmds.to_s
        self.debug_msg( "Created command %p for %p", wbuf, server )
        wbuf += CRLF

        # Make a buffer tuple (read/write) for the server
        buffers[server] = { :rbuf => '', :wbuf => wbuf }

        # Register the server's socket with the reactor
        @reactor.register( server.socket, :write, :read, server,
            buffers[server], multiline, &handler )
    end

    # Do all the IO at once
    self.debug_msg( "Reactor starting for %d IOs", @reactor.handles.length )
    @reactor.poll until @reactor.empty?
    self.debug_msg( "Reactor finished." )

    # Build the return value, delegating the processing to a block if one
    # was given.
    pairs.each {|server,cmds|
        # Handle protocol errors if they happen. I have no idea if this is
        # desirable/correct behavior: none of the other clients react to
        # CLIENT_ERROR or SERVER_ERROR at all; in fact, I think they'd all
        # hang on one like this one did before I added them to the
        # terminator pattern in #handle_line_io. So this may change in the
        # future if it ends up being better to just ignore errors, try to
        # cache/fetch what we can, and hope returning nil will suffice in
        # the face of error conditions
        self.handle_protocol_error( buffers[server][:rbuf], server ) if
            ANY_ERROR.match( buffers[server][:rbuf] )

        # If the caller is doing processing on the reply, yield each buffer
        # in turn. Otherwise, just use the raw buffer as the return value
        if block_given?
            self.debug_msg( "Yielding value/s %p for %p",
                buffers[server][:rbuf], server )
            rval[server] = yield( server, buffers[server][:rbuf] )
        else
            rval[server] = buffers[server][:rbuf]
        end
    }

    return rval
end
Store the specified value to the cache associated with the specified key and expiration time exptime.
# File lib/memcache.rb, line 780
def store( type, key, val, exptime )
    return self.delete( key ) if val.nil?
    svr = self.get_server( key )
    cachekey = self.make_cache_key( key )
    res = nil

    self.add_stat( type ) {
        # Prep the value for storage
        sval, flags = self.prep_value( val )

        # Form the command
        cmd = []
        cmd << "%s %s %d %d %d" %
            [ type, cachekey, flags, exptime, sval.length ]
        cmd << sval
        self.debug_msg( "Storing with: %p", cmd )

        # Send the command and read the reply
        res = self.send( svr => cmd )
    }

    # Check for an appropriate server response
    return (res && res[svr] && res[svr].rstrip == "STORED")
end