class MediaWiktory::Wikipedia::Response
Thin wrapper around MediaWiki API response.
It provides services for separating a response's metadata from its essential data, continuing multi-page responses, and converting response errors into exceptions.
You should not instantiate this class directly; it is obtained from some {Actions Action}'s {Actions::Base#response response}.
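A minimal usage sketch (the action chain below is illustrative; see the {Actions} documentation for the real query-building methods — only #response and the Response methods documented here are guaranteed by this page):

    # Hypothetical client setup and action chain.
    api = MediaWiktory::Wikipedia::Api.new
    action = api.query.titles('Argentina')

    response = action.response                # => MediaWiktory::Wikipedia::Response
    response.to_h                             # content part of the response
    response.metadata                         # service keys like "continue"
    response.continue if response.continue?   # fetch and merge the next page, if any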
Constants
- Error: Raised when a failure response was returned by the target MediaWiki API.
- METADATA_KEYS: @private
Attributes
metadata
Metadata part of the response: keys like “error”, “warnings”, “continue”.
See {#to_h} for the content part of the response and {#raw} for the entire response.
@return [Hash]
raw
Entire response “as is”, including both the content and the metadata parts.
See {#to_h} for the content part of the response and {#metadata} for the metadata part.
@return [Hash]
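To illustrate how the two attributes relate to {#to_h}, for a (made-up) query response:

    response.raw       # => {"continue" => {...}, "query" => {"pages" => {...}}}  # everything
    response.metadata  # => {"continue" => {...}}                                 # service part
    response.to_h      # => {"pages" => {...}}                                    # content part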
Public Class Methods
@private
    # File lib/mediawiktory/wikipedia/response.rb, line 39
    def initialize(action, response_hash)
      @action = action
      @raw = response_hash.freeze
      @metadata, @data = response_hash.partition { |key, _| METADATA_KEYS.include?(key) }.map(&:to_h).map(&:freeze)
      error! if @metadata['error']
    end
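The metadata/content split is a plain Hash#partition over the top-level keys; a standalone sketch of the same idiom (the key list here is illustrative, not the real METADATA_KEYS):

    metadata_keys = %w[error warnings continue]
    hash = { 'continue' => { 'plcontinue' => '123|0|Foo' }, 'query' => { 'pages' => {} } }

    metadata, data = hash.partition { |key, _| metadata_keys.include?(key) }.map(&:to_h)
    metadata # => {"continue"=>{"plcontinue"=>"123|0|Foo"}}
    data     # => {"query"=>{"pages"=>{}}}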
@private
    # File lib/mediawiktory/wikipedia/response.rb, line 20
    def self.parse(action, response_body)
      new(action, JSON.parse(response_body))
    end
Public Instance Methods
Fetches a key from response content.
@param key [String] Key name.
    # File lib/mediawiktory/wikipedia/response.rb, line 58
    def [](key)
      to_h[key]
    end
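Usage sketch (the "pages" key is typical of query actions, but depends on the action performed):

    response['pages']  # same as response.to_h['pages']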
Continues the current request and returns the current and next pages merged. (Merging is necessary because MediaWiki tends to continue the same object's data on the next response page.)
@return [Response]
    # File lib/mediawiktory/wikipedia/response.rb, line 78
    def continue
      fail 'This is the last page' unless continue?

      action = @action.merge(@metadata.fetch('continue'))
      self.class.new(action, merge_responses(JSON.parse(action.perform)))
    end
Returns `true` if there is a next page of the response. See also {#continue}.
    # File lib/mediawiktory/wikipedia/response.rb, line 70
    def continue?
      @metadata.key?('continue')
    end
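A typical pattern is to keep continuing while more pages are available; since {#continue} returns already-merged data, the last response holds everything (sketch, assuming `action` is some performed action):

    response = action.response
    response = response.continue while response.continue?
    response.to_h  # content of all pages, deep-merged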
Digs for keys in the response content.
@param keys [Array<String>] Key names.
    # File lib/mediawiktory/wikipedia/response.rb, line 65
    def dig(*keys)
      hash_dig(to_h, *keys)
    end
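Usage sketch (the page id and keys are made up; returns nil if any key along the way is missing):

    response.dig('pages', '123', 'revisions')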
@return [String]
    # File lib/mediawiktory/wikipedia/response.rb, line 87
    def inspect
      "#<#{self.class.name}(#{@action.name}): #{to_h.keys.join(', ')}#{' (can continue)' if continue?}>"
    end
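For a hypothetical query action whose content has a "pages" key and which has more pages to fetch, the output would look roughly like:

    #<MediaWiktory::Wikipedia::Response(query): pages (can continue)>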
“Content” part of the response as a plain Ruby Hash.
@return [Hash]
    # File lib/mediawiktory/wikipedia/response.rb, line 49
    def to_h
      # For most actions, like query, the entire response content is nested inside an
      # additional "query" key... but not for all of them.
      @data.key?(@action.name) ? @data.fetch(@action.name) : @data
    end
Private Instance Methods
    # File lib/mediawiktory/wikipedia/response.rb, line 113
    def error!
      fail Error, hash_dig(@metadata, 'error', 'info')
    end
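From the caller's perspective this means an API-level failure surfaces as {Error} when the response is constructed; a handling sketch (`action` is illustrative):

    begin
      response = action.response
    rescue MediaWiktory::Wikipedia::Response::Error => e
      warn "MediaWiki API error: #{e.message}"  # message is the API's error "info" text
    end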
TODO: replace with Hash#dig when the minimal supported Ruby version is 2.3.
    # File lib/mediawiktory/wikipedia/response.rb, line 118
    def hash_dig(hash, *keys)
      keys.inject(hash) { |res, key| res[key] or return nil }
    end
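A standalone illustration of the helper's behavior: it returns nil as soon as any key is missing, like Hash#dig does on Ruby >= 2.3 (the error hash below is made up):

    def hash_dig(hash, *keys)
      keys.inject(hash) { |res, key| res[key] or return nil }
    end

    error = { 'error' => { 'code' => 'badtoken', 'info' => 'Invalid token' } }
    hash_dig(error, 'error', 'info')  # => "Invalid token"
    hash_dig(error, 'warnings', 'x')  # => nil
    error.dig('error', 'info')        # => "Invalid token" (Ruby >= 2.3 equivalent)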
    # File lib/mediawiktory/wikipedia/response.rb, line 93
    def merge_responses(new_response)
      merger = lambda do |_k, v1, v2|
        if v1.is_a?(Hash) && v2.is_a?(Hash)
          v1.merge(v2, &merger)
        elsif v1.is_a?(Array) && v2.is_a?(Array)
          v1 + v2
        else
          v2
        end
      end

      # The newest page is responsible for all metadata, so we take the entire new response
      # and only the data part of the old one.
      #
      # A deep recursive merge is necessary because MediaWiki can split the response into parts
      # unpredictably (e.g. ['query']['pages'][some_page_id] can be present on several pages,
      # each providing different parts of the page).
      @data.merge(new_response, &merger)
    end
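A standalone sketch of how that deep merge treats nested hashes and arrays (the page data is made up; the real method additionally takes the metadata wholesale from the new response):

    merger = lambda do |_key, v1, v2|
      if v1.is_a?(Hash) && v2.is_a?(Hash)
        v1.merge(v2, &merger)
      elsif v1.is_a?(Array) && v2.is_a?(Array)
        v1 + v2
      else
        v2
      end
    end

    old_page = { 'pages' => { '1' => { 'title' => 'Foo', 'links' => [{ 'title' => 'A' }] } } }
    new_page = { 'pages' => { '1' => { 'links' => [{ 'title' => 'B' }] } } }

    old_page.merge(new_page, &merger)
    # => {"pages"=>{"1"=>{"title"=>"Foo", "links"=>[{"title"=>"A"}, {"title"=>"B"}]}}}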