Path: doc/release_notes/4.9.0.txt
Last Update: Thu Nov 12 08:45:04 +0000 2015
Using this optimization framework, generating the SQL for a query is about 3x faster, and since SQL generation time is a significant portion of total time for simple queries, simple queries can execute up to 50% faster.
There are two APIs for this optimization framework. There is a lower level dataset API:
  loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
    ds.where(:id=>pl.arg).exclude(:name=>pl.arg).limit(1)
  end

  loader.first(1, "foo")
  # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo')) LIMIT 1

  loader.first([1, 2], %w"foo bar")
  # SELECT * FROM items WHERE ((id IN (1, 2)) AND
  #   (name NOT IN ('foo', 'bar'))) LIMIT 1
There is also a higher level model API (Model.finder):
  class Item < Sequel::Model
    # Given class method that returns a dataset
    def self.by_id_and_not_name(id, not_name)
      where(:id=>id).exclude(:name=>not_name)
    end

    # Create optimized method that returns first value
    finder :by_id_and_not_name
  end

  # Call optimized method
  Item.first_by_id_and_not_name(1, 'foo')
  # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo')) LIMIT 1
Model.finder defaults to creating a method that returns the first matching row, but using the :type option you can create methods that call each, all, or get. There is also an option to choose the method name (:name), as well as one to specify the number of arguments to use if the method doesn't take a fixed number (:arity).
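For illustration, here is a rough sketch of those options, reusing the Item class from the example above. The :type, :name, and :arity option names follow the description in this note; the custom method name chosen below is arbitrary:

  class Item < Sequel::Model
    def self.by_id_and_not_name(id, not_name)
      where(:id=>id).exclude(:name=>not_name)
    end

    # Create an optimized method that returns all matching rows,
    # under a custom method name
    finder :by_id_and_not_name, :type=>:all, :name=>:all_by_id_and_not_name
  end

  Item.all_by_id_and_not_name(1, 'foo')
  # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo'))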
Finally, Model.find, .first, and .first! now automatically use an optimized finder if given a single argument. Model.[] uses an optimized finder if given a single hash, and Model.[], .with_pk, and .with_pk! use an optimized finder if the model has a composite primary key. In all of these cases, these methods are about 50% faster than before.
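For example, assuming an Album model backed by an albums table, each of the following now goes through an optimized finder:

  Album.first(:name=>'foo')
  # SELECT * FROM albums WHERE (name = 'foo') LIMIT 1

  Album[:name=>'foo']
  # SELECT * FROM albums WHERE (name = 'foo') LIMIT 1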
Unfortunately, there are enough corner cases to this approach that it cannot be used by default. At the least, the dataset must select the columns it is ordering by, must not alias those columns in the SELECT clause, must not have NULLs in any of the ordered columns, and must not itself use a limit or offset.
If you are ordering by expressions that are not simple column values, you can provide a :filter_value option, a proc that takes the last retrieved row and an array of order by expressions, and returns an array of values in the last retrieved row for those order by expressions.
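As a non-authoritative sketch, assuming the paging described above is Dataset#paged_each's filter-based strategy (the :strategy and :filter_value option spellings, the huge_table table, and the lower(name) ordering below are all assumptions for illustration):

  # Paged iteration over rows ordered by an expression, supplying the
  # proc described above so the filter-based paging can compute the
  # boundary values from the last retrieved row.
  DB[:huge_table].order{lower(:name)}.paged_each(
    :strategy=>:filter,
    :filter_value=>proc{|last_row, exprs| [last_row[:name].downcase]}
  ) do |row|
    # process row
  end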
In the postgres adapter, Dataset#where_current_of can be used together with a cursor to update or delete the row at the cursor's current position:

  ds = DB[:huge_table]
  ds.use_cursor(:rows_per_fetch=>1).each do |row|
    ds.where_current_of.update(:column=>ruby_method(row))
  end
Note that if you use disable_insert_returning, insert will not return the autoincremented primary key. You need to call currval or lastval manually using the same connection to get the value, or use nextval to get the value to use before inserting.
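A minimal sketch of the manual approach, assuming a PostgreSQL Database object and a hypothetical items table whose id column is backed by an items_id_seq sequence. Database#synchronize is used only to ensure the insert and the currval call share the same connection:

  DB.synchronize do
    ds = DB[:items].disable_insert_returning
    ds.insert(:name=>'foo')               # does not return the new id
    id = DB.get{currval('items_id_seq')}  # fetch the generated id manually
  end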