
Sequel:

The Database Toolkit for Ruby


Jeremy Evans

History
Originally developed by Sharon Rosner
March 2007: 0.0.1 - First release
January 2008: 1.0 - Split into sequel, sequel_core, and sequel_model gems
February 2008: 1.2 - I started using Sequel
March 2008: 1.3 - Model associations; I became developer, then maintainer of Sequel
April 2008: 1.4 - Eager loading; sequel and sequel_model gems merged
April 2008: 1.5 - Dataset graphing; much deprecation

History (2)
May 2008: 2.0 - Expression filters; deprecated method removal; massive code cleanup and
documentation updates
July 2008: 2.3 - JRuby/Ruby 1.9 support; sequel_core and sequel gems merged
August 2008: 2.4 - Bound variables/prepared statements; master/slave databases and sharding
Since: Many features and bug fixes

sequel_core vs sequel_model
NOT *-core vs. *-more
sequel_core:
Dataset-centric, returns plain hashes
Basically a ruby DSL for SQL
Good for aggregate reporting, dealing with sets of objects
Also houses the adapters, core extensions, connection pool, migrations, and some utilities
sequel_model:
Object-centric, returns model objects
An ORM built on top of sequel_core
Good for dealing with individual objects
Also houses the string inflection methods
Model classes proxy many methods to their underlying dataset, so you get the benefits of sequel_core when using sequel_model

Database Support
13 supported adapters: ADO, DataObjects, DB2, DBI, Firebird, Informix, JDBC, MySQL, ODBC, OpenBase, Oracle, PostgreSQL, and SQLite3
Some adapters support multiple databases: DataObjects, JDBC
Some databases are supported by multiple adapters: MySQL, PostgreSQL, SQLite
PostgreSQL adapter can use pg, postgres, or postgres-pr driver

Adding adapters
Adding additional adapters is pretty easy
Need to define:
Database#connect method that returns an adapter-specific connection object
Database#disconnect_connection method that disconnects adapter-specific connection object
Database#execute method that runs SQL against the database
Database#dataset method that returns a dataset subclass instance for the database
Dataset#fetch_rows method that yields hashes with symbol keys
Potentially, that's it (see the sketch below)
About 1/3 of Sequel code is in the adapters
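A minimal sketch of that API, assuming a hypothetical low-level driver gem (MyDB::Connection, its query and close methods, and its symbol-keyed hash rows are all made up for illustration):

module Sequel
  module MyDB
    class Database < Sequel::Database
      set_adapter_scheme :mydb

      # Return an adapter-specific connection object
      def connect(server = nil)
        ::MyDB::Connection.new(server_opts(server))
      end

      # Disconnect the adapter-specific connection object
      def disconnect_connection(conn)
        conn.close
      end

      # Run SQL on a pooled connection; the hypothetical driver
      # returns an enumerable of hashes with symbol keys
      def execute(sql, opts = {})
        synchronize(opts[:server]) do |conn|
          r = conn.query(sql)
          r.each{|row| yield row} if block_given?
          r
        end
      end

      # Return a dataset subclass instance for this database
      def dataset(opts = nil)
        MyDB::Dataset.new(self, opts)
      end
    end

    class Dataset < Sequel::Dataset
      # Yield each row as a hash with symbol keys
      def fetch_rows(sql)
        db.execute(sql){|row| yield row}
      end
    end
  end
end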

Adding adapters (2)
However, Dataset#delete and Dataset#update should return number of rows deleted/updated
And Dataset#insert should return the primary key value of the row inserted (if any)
Generally this is done by Database#execute or a related method
If Sequel already supports the underlying database, just include the shared adapter code
Otherwise, you need to deal with SQL syntax issues
Running the integration tests is a good way to check support

Sequel::Database
uri = 'postgres://user:pass@host/database'
DB = Sequel.connect(uri)
DB = Sequel.postgres(database, :host => host)
DB = Sequel.sqlite # Memory database

Represents a virtual database connection
Uses a (not-shared) connection pool internally
Mainly used to:
Modify the database schema
Set defaults for datasets (e.g. quoting)
Create datasets
Setup SQL loggers
Handle transactions
Execute SQL directly

Sequel::Database (2)
# Set defaults for future datasets
DB.quote_identifiers = true
# Create Datasets
dataset = DB[:attendees]
# Setup SQL Loggers
DB.loggers << Logger.new($stdout)
# Handle transactions: block required, no way
# to leave a transaction open indefinitely
DB.transaction {
  # Execute SQL directly
  rows = DB['SELECT * FROM ...'].all
  DB["INSERT ..."].insert
  DB["DELETE ..."].delete
  DB << "SET ..." }

Connection Pooling
Sequel has thread-safe connection pooling
No need for manual cleanup
Only way to get a connection is through a block
Block ensures the connection is returned to the pool before it exits
Makes it impossible to leak connections
Connection not checked out until final SQL string is ready
Connection returned as soon as iteration of results is finished
This allows for much better concurrency
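For example, if you ever do need direct access to a connection (a sketch; conn's API depends on the adapter in use):

DB.synchronize do |conn|
  # conn is the adapter-specific connection object; it is checked
  # back into the pool when the block exits, even on error
end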

Sequel::Dataset
Represents an SQL query, or more generally, an abstract set of rows/objects
Most methods return modified copies, functional style
Don't need to worry about the order of methods, usually
Build your SQL query by chaining methods

DB[:table].limit(5, 2).order(:column4).
  select(:column1, :column2).
  filter(:column3 => 0..100).all

Fetching Rows
#each iterates over the returned rows
Enumerable is included
Rows are returned as hashes with symbol keys
Can set an arbitrary proc to call with the hash before yielding (how models are implemented)
#all returns all rows as an array of hashes
No caching is done
If you don't want two identical queries for the same data, store the results of #all in a variable, and use that variable later
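For example:

DB[:attendees].each{|row| puts row[:name]} # row is a plain Hash
rows = DB[:attendees].all # one query
rows.each{|row| puts row[:name]} # no query, reuses the array
ids = rows.map{|row| row[:id]} # still no query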

Dataset inserting, updating, and deleting
#insert inserts records
#update updates records
#delete deletes records

DB[:attendees].insert(:name => 'Jeremy Evans')

DB[:attendees].filter(:confirmed => nil).
  update(:confirmed => true)
DB[:attendees].filter(:paid => false).delete

Dataset Filtering
ds = DB[:attendees]
# Strings
ds.filter('n = 1') # n = 1
ds.filter('n > ?', 'M') # n > 'M'
# Hashes
ds.filter(:n => 1) # n = 1
ds.filter(:n => nil) # n IS NULL
ds.filter(:fn => 'ln') # fn = 'ln'
ds.filter(:fn => :ln) # fn = ln
ds.filter(:n => [1, 2]) # n IN (1, 2)
ds.filter(:n => 1..2) # n >= 1 AND n <= 2

More Advanced Filtering
ds.filter(:p * :q + 1 < :r) # p * q + 1 < r
ds.filter(:p / (:q + 1) >= :r) # p / (q + 1) >= r
ds.filter({:p => 3} | :r) # p = 3 OR r
ds.filter(~{:p => 'A'} & :r) # p != 'A' AND r
ds.filter(~:p) # NOT p
ds.filter(~(~{:p => :q} & :r)) # p = q OR NOT r
ds.filter(:p => ds.select(:q).filter(:r))
# p IN (SELECT q FROM attendees WHERE r)

SQL String Manipulation
Concatenation
ds.select(:p + :q) # p + q
ds.select(:p.sql_string + :q) # p || q
ds.select([:p, :q].sql_string_join) # p || q

Searching
ds.filter(:p.like(:q)) # p LIKE q
ds.filter(:p.like('q', /r/)) # p LIKE 'q' OR p ~ 'r'
ds.filter([:p, :q].sql_string_join.like('Test'))
# (p || q) LIKE 'Test'
Identifier Symbols
As a shortcut, Sequel allows you to use plain symbols to signify qualified and/or aliased columns:
:table__column => table.column
:column___alias => column AS alias
:table__column___alias => table.column AS alias
You can use methods to do the same thing, if you want:
:column.qualify(:table)
:column.as(:alias)
:column.qualify(:table).as(:alias)
Can also be used for schemas (:schema__table)
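For example, both of these produce the same query:

DB[:attendees].select(:attendees__name___attendee)
DB[:attendees].select(:name.qualify(:attendees).as(:attendee))
# SELECT attendees.name AS attendee FROM attendees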

Dataset Joining
ds = DB[:attendees].
  join_table(:inner, :events, :id => :event_id)
# FROM attendees INNER JOIN events
# ON (events.id = attendees.event_id)

Uses implicit qualification to reduce verbosity
Unqualified keys are qualified with the table you are joining
Unqualified values are qualified with the last table joined, or the first table if no table was previously joined

Dataset Joining (2)
ds = ds.join(:locations, :id => :location_id)
# ... INNER JOIN locations ON
# (locations.id = events.location_id)

ds = ds.left_outer_join(:caterers, \
  :id => :events__caterer_id)
# ... LEFT OUTER JOIN caterers ON
# (caterers.id = events.caterer_id)

Need to qualify table names if implicit qualification would be incorrect
Can use helper methods instead of join_table: join/inner_join or (left|right|full)_outer_join

Join Clobbering
# attendees: id, name, event_id
# events: id, name
ds = DB[:attendees].join(:events, :id => :event_id)
ds.all
# => [{:id=>events.id, :name=>events.name}, ...]

Like SQL, returns all columns from both tables unless you choose which columns to select
SQL generally returns rows as arrays, so it's possible to differentiate columns that have the same name but are in separate tables
Sequel returns rows as hashes, so identical names will clobber each other (last one wins)
This makes a join problematic if the tables share column names

Join Clobbering (2)
ds.select(:attendees__name___attendee,
  :events__name___event)
ds.all
# => [{:attendee=>attendees.name,
#      :event=>events.name}, ...]

You can use Dataset#select to restrict the columns returned, and/or to alias them to eliminate clobbering
But that's ugly and cumbersome
There's got to be a better way!

Dataset Graphing
# attendees: id, name, event_id
# events: id, name
ds = DB[:attendees].graph(:events, :id => :event_id)
ds.all
# => [{:attendees=>{...}, :events=>{...}}, ...]

Splits resulting hashes into subhashes per table
Eliminates clobbering by automatically aliasing columns as necessary
You can manually change the aliases via (add|set)_graph_aliases

Sequel::Model
Allows easily adding methods to datasets and returned objects
Each model class is associated with a single dataset
The model class object proxies many methods to its dataset
Model instances represent individual rows in the dataset
The basics are similar to AR and DM

class Attendee < Sequel::Model; end
class Event < Sequel::Model(:events); end
class Foo < Sequel::Model
  set_dataset db[:bar].join(:baz, :id => :baz_id)
end

Model Associations
Player.many_to_one :team
Team.one_to_many :players
Player.first.team
# SELECT * FROM teams WHERE id = #{player.team_id}
Team.first.players
# SELECT * FROM players WHERE team_id = #{team.id}
Attendee.many_to_many :events
Attendee.first.events
# SELECT events.* FROM events INNER JOIN
# attendee_events ON attendee_events.event_id=events.id
# AND attendee_events.attendee_id = #{attendee.id}

No one_to_one, but available as an option to one_to_many

No Proxies for Associations
Player.many_to_one :team
player.team # Team or nil
player.team = team

Team.one_to_many :players
team.players # Array of Players
team.players_dataset # Sequel::Dataset for this
# team's players
team.add_player(player)
team.remove_player(player)
team.remove_all_players

No Proxies for Associations (2)
Proxies make the design more complex
The dataset returned by the association_dataset method does most of what you would want from a proxy (further filtering, reordering, etc.)
Association add/remove methods are simple to understand
The add/remove methods only affect the objects' relationship
team.remove_player(player) removes the player from the team; it doesn't delete the player
Bottom line: easier to understand, less magical

No Proxies for Associations (3)
Main complaint: association_dataset method looks ugly
Solution: Use multiple associations (see :clone option)
Using association_dataset in multiple places for the same reason is not DRY
Results returned by association_dataset are not cached, unlike regular association methods
DRY up your code by adding a real association
Using a real association means caching is done correctly, and the API is nicer
Having many associations for the same type of object is not a bad thing; it leads to more descriptive code (a sketch follows)
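A sketch of that advice, assuming a hypothetical starter boolean column on players:

Team.one_to_many :players
# Instead of team.players_dataset.filter(...) scattered around:
Team.one_to_many :starters, :clone => :players,
  :conditions => {:starter => true}
team.starters # cached like any regular association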

Association Options
There are lots of options: 27 currently, not counting ones specific to certain associations
Most are only useful in fairly rare circumstances, but if you have that circumstance...
Common ones: :select, :order, :limit (also used for offsets), :conditions, :class (takes a class or name), :read_only
Association methods take blocks:

# Only return players with 20 or more goals
Team.one_to_many(:high_scorers, :class => Player){|ds|
  ds.filter{|o| o.goals >= 20}}

Association Options (2)
Many options affect eager loading, which is coming up
5 callback options: before/after add/remove, and after load
Can use :extend option to extend the association_dataset with a module:

module SameName
  def self_titled
    first(:name => model_object.name)
  end
end
Artist.one_to_many :albums, :extend => SameName
Artist.first.albums_dataset.self_titled

Overriding Association Methods
Association add/remove or getter/setter methods are designed to be easy to override
These methods come in pairs:
add_association and remove_association: Handle caching, callbacks, and other stuff
_add_association and _remove_association: Do the actual database work
Same for many_to_one setter: association= and _association=
Leading underscore methods are private
All of these methods can be overridden, and super can be used (the same is true of the column accessor methods); a sketch follows
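A sketch of overriding the private half of one pair (the :active column is an assumption; add_player still handles caching and callbacks before calling it):

class Team < Sequel::Model
  one_to_many :players

  private

  # The actual database work for team.add_player(player)
  def _add_player(player)
    player.update(:team_id => pk, :active => true)
  end
end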

Eager Loading
Two separate methods: eager and eager_graph
eager loads each table separately
eager_graph does joins
Argument structure similar to AR's :include
Can combine the two, to a certain extent:
Works fine: eager(:blah).eager_graph(:bars, :foos)
Works fine: eager(:blah=>:bars).eager_graph(:foos=>:bazs)
Problematic: eager(:blah=>:bars).eager_graph(:blah=>:foos)
Possibly fixable, but no one has complained...
Why two methods?:
User choice (performance, explicitness)
Sequel does not parse SQL!

Advanced Associations
Sequel allows you full control over associations via the :dataset option
# AR has_many :through=>has_many
# Firm one_to_many Clients one_to_many Invoices
Firm.one_to_many :invoices, :dataset => proc{
  Invoice.eager_graph(:client).
    filter(:client__firm_id => pk)}

# Joining on any of multiple keys to a single key
# through a third table (bare id is the artist's
# primary key value, not a column symbol)
Artist.one_to_many :songs, :dataset => proc{
  Song.select(:songs.*).join(Lyric, :id => :lyric_id,
    id => [:composer_id, :arranger_id, :vocalist_id,
      :lyricist_id])}

Eager Loading of Advanced Associations
Sequel allows you full control via the :eager_loader option
Firm.one_to_many :invoices, :eager_loader => (
  proc{|key_hash, firms, associations|
    # key_hash: {:id=>{1=>[firm1], 2=>[firm2]},
    #   :type_id=>{1=>[firm1, firm2], 2=>[firm3, firm4]}}
    id_map = key_hash[Firm.primary_key]
    firms.each{|f| f.associations[:invoices] = []}
    Invoice.eager_graph(:client).
      filter(:client__firm_id => id_map.keys).all{|inv|
        id_map[inv.client.firm_id].each{|firm|
          firm.associations[:invoices] << inv
        }}})

Polymorphic Associations - DRY Too Far
Only one advantage to polymorphic associations: fewer tables
When you break it down, that's it, as all polymorphic associations can be broken down into simpler relationships simply by using more tables
Using separate tables to express relationships between different tables is a good thing, for the same reason that using separate tables for different entities is a good thing (even if the schemas are the same)
They are more complex, for no real benefit, and they break referential integrity
There's a Sequel plugin available if you are stuck with a legacy schema that uses them

Advanced Associations: What Else Can You Do?
Polymorphic associations (the plugin just uses the options and techniques already discussed, no funny stuff)
Join on multiple keys
Load all ancestors or descendants in a tree structure
All of these associations can be eagerly loaded
See the Advanced Associations RDoc for example code

Validations
Philosophy: Only useful for displaying nice error messages to the user; actual data integrity should be handled by the database
9 standard validations available: acceptance_of, confirmation_of, format_of, inclusion_of, length_of, not_string, numericality_of, presence_of, uniqueness_of
Easy shorthand via validates:

validates do
  format_of :a, :with => /\A.*@.*\..*\z/
  uniqueness_of :a_id, :b_id # both unique
  uniqueness_of [:a_id, :b_id] # combination
end

Custom Validations
validates_each: Backbone of defining the standard validations and any custom ones
Requires a block (called with the object, attribute(s), and attribute value(s)); accepts multiple attribute arguments and a hash of options
Built-in support for :if, :allow_missing, :allow_nil, and :allow_blank options
Can use arrays of attributes in addition to individual attributes

validates_each(:amount, :if => :confirmed?) do |o, a, v|
  o.errors[a] << "is less than 100" if v < 100
end

Hooks
Philosophy: Should not be used for data integrity, use a database trigger for that
Called before or after certain model actions: initialize (after only), save, create, update, destroy, and validation
Arbitrary hook types can be defined via add_hook_type, useful for plugins (all standard hooks are implemented using it)
Can use a symbol specifying an instance method, or a proc
Class methods add hooks, instance methods call them
Returning false from any hook cancels the rest of the hook chain
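For example, assuming registered_at and confirmed columns:

class Attendee < Sequel::Model
  # Class methods add hooks; a symbol names an instance method
  before_create :set_registered_at
  # A proc works too; returning false here cancels the destroy
  before_destroy{!confirmed}

  def set_registered_at
    self.registered_at ||= Time.now
  end
end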

Dataset Pagination
Built in support via Dataset#paginate and Dataset#each_page
Dataset#paginate applies a limit and offset and returns a dataset with helper methods such as next_page and prev_page
Useful for building a search engine or website showing a defined number of records per page
Dataset#each_page yields paginated datasets of a given length starting with page 1
Useful for processing all records, but only loading a given number at a time due to memory constraints
You should probably run #each_page inside of a transaction unless you know what you are doing
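For example (process is a hypothetical helper):

page = DB[:attendees].paginate(1, 25) # page 1, 25 rows per page
page.next_page # => 2, or nil on the last page
DB.transaction do
  DB[:attendees].each_page(100){|ds| ds.each{|row| process(row)}}
end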

Model Caching
Built in support via Model.set_cache
Caches to any object with the following API:
#set(key, object, seconds): Store object with key for the given number of seconds
#get(key): Return object with matching key, or nil if there is no object
This API is used by Ruby-MemCache, so it works with that by default
The cache is only used when Model.[] is called with the primary key
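For example, with Ruby-MemCache (the :ttl option and a 10-minute expiry are shown; the server address is illustrative):

require 'memcache'
CACHE = MemCache.new('localhost:11211')
class Attendee < Sequel::Model
  set_cache CACHE, :ttl => 600
end
Attendee[1] # checks the cache before querying
Attendee.filter(:id => 1).first # never uses the cache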

Schema Definition
DB.create_table(:attendees) do
  primary_key :id # integer/serial/identity
  String :name # varchar(255)/text
  column :registered_at, DateTime # timestamp
  money :price # money
  foreign_key :event_id, :events
  index :name, :unique => true
  index [:name, :event_id]
  constraint :b, ~{:price => 0} # price != 0
  check{|o| o.registered_at > '2008-12-31'}
  primary_key [:name, :price] # composite pk
  foreign_key [:name, :price], :blah, \
    :key => [:att_name, :att_price] # composite fk
end

Schema Modification
DB.alter_table(:attendees) do
  add_column :confirmed, :boolean, :null => false
  drop_constraint :b
  add_constraint :c do |o| # price != 0 if confirmed
    {:confirmed => ~{:price => 0}}.case(true)
  end
  add_foreign_key :foo_id, :foos
  add_primary_key :id
  rename_column :id, :attendee_id
  drop_column :id
  set_column_default :name, 'Jeremy'
  set_column_type :price, Numeric
  set_column_allow_null :confirmed, true
end

Schema Modification (2)
DB.add_column :attendees, :confirmed, :boolean
DB.add_index :attendees, :confirmed
DB.drop_index :attendees, :confirmed
DB.rename_column :attendees, :id, :attendee_id
DB.drop_column :attendees, :attendee_id
DB.set_column_default :attendees, :name, 'Jeremy'
DB.set_column_type :attendees, :price, Numeric
DB.rename_table :attendees, :people
DB.drop_table :people
DB.create_view :ac, DB[:attendees].where(:confirmed)
DB.drop_view :ac

Migrations
Similar to ActiveRecord migrations
Migration class proxies most methods to Database

class CreateAttendees < Sequel::Migration
  def up
    create_table(:attendees) {
      primary_key :id
      String :name }
  end

  def down
    drop_table(:attendees)
  end
end # CreateAttendees.apply(DB, :up)

Migrator
Migrations are just classes that can be used individually via an API
Sequel::Migrator deals with a directory of files containing migrations, similar to AR
Filenames should start with an integer representing the version of the migration, similar to AR before timestamped migrations
You can use the Migrator API, or the sequel command line tool -m switch

Sequel::Migrator.apply(DB, '.') # To current version
Sequel::Migrator.apply(DB, '.', 5, 1) # To 5 from 1
# $ sequel -m /path/to/migrations -M 5 postgres://...

Migration Philosophy
Migrations should preferably only do schema modification, with no data modification unless necessary
Migrations should be self-contained, and not reference any part of your app (such as your models)
Migrations are deliberately not timestamped:
The whole point of timestamped migrations was to allow multiple teams working on the same app to add migrations to different branches without requiring manual intervention when merging
That is a poor idea, as there is no guarantee that the modifications will not conflict
Using integer versions instead of timestamps means the maintainer has to take manual effort when merging branches with different migrations, which is a good thing

Model Schemas
Models can use set_schema/create_table for a DataMapper-like way of handling things
This isn't recommended, as it makes schema changes difficult
Use migrations instead for any production app
For test code/examples, it is OK
However, even then I prefer using the standard database schema methods before the model definition
Sequel's philosophy is that the model is simply a nice front end to working with the database, not that the database is just a place to store the model's data

Bound Variables
Potentially faster depending on the query (no literalization of large objects)
Don't assume better performance, and don't use without profiling/benchmarking
Use :$blah placeholders on all databases
Native support on PostgreSQL, JDBC, and SQLite; others have emulated support

ds = DB[:items].filter(:name => :$n)
ds.call(:select, :n => 'Jim')
ds.call(:update, {:n => 'Jim', :new_n => 'Bob'}, \
  :name => :$new_n)

Prepared Statements
Similar to bound variable support:
Potentially faster due to reduced literalization and query plan caching
Only use after profiling/benchmarking
Uses same :$blah placeholders
Native support on PostgreSQL, JDBC, SQLite, and MySQL; emulated support on other databases

ds = DB[:items].filter(:name => :$n)
ps = ds.prepare(:select, :select_by_name)
ps.call(:n => 'Jim')
DB.call(:select_by_name, :n => 'Jim')
ps2 = ds.prepare(:update, :update_name, \
  :name => :$new_n)
ps2.call(:n => 'Jim', :new_n => 'Bob')

Stored Procedures
Only supported in the MySQL and JDBC adapters
Similar to prepared statement support

DB[:table].call_sproc(:select, :mysp, \
  'param1', 'param2')
sp = DB[:table].prepare_sproc(:select, :mysp)
sp.call('param1', 'param2')
sp.call('param3', 'param4')

Master/Slave Databases
Sequel has built in support for master/slave database configurations
SELECT queries go to slave databases; all other queries go to the master database
No code modifications are required, just need to modify the Sequel.connect call

DB = Sequel.connect('postgres://m/db', :servers => \
  {:read_only => proc{|db| {:host => db.slave}}}, \
  :max_connections => 16) # 4 connections per slave
# Master host: m; slave hosts: s1, s2, s3, s4
def DB.slave; "s#{@current_host = (@current_host || 0) % 4 + 1}" end

Sharding/Partitioning
Sequel makes it simple to deal with a sharded/partitioned database setup
Basically, you can set any standard query to use whichever server you specify, using the Dataset#server method
Implemented in the generic connection pool, so all adapters are supported

s = {}
%w'a b c d'.each{|x| s[x.to_sym] = {:host => "s#{x}"}}
DB = Sequel.connect('postgres://m/db', :servers => s)
DB[:table].server(:a).filter(:num => 10).all
DB[:table].server(:b).filter(:num => 100).delete

Core Class Methods
Sequel adds methods to the core classes, probably more than it should
Some of these are designed to be used by the user in constructing queries, and some are general, designed to make the implementation easier; I'll cover the former

h = {:a => 1, :b => 2} # or [[:a,1],[:b,2]]
h.sql_expr # a = 1 AND b = 2
h.sql_negate # a != 1 AND b != 2
h.sql_or # a = 1 OR b = 2
~h # a != 1 OR b != 2
h.case(0, :c) # CASE c WHEN a THEN 1
# WHEN b THEN 2 ELSE 0 END

Core Class Methods (2)
# (a,b) IN (1=2 AND 3=4)
filter([:a,:b]=>[[1,2],[3,4]])
# (a,b) IN ((1,2),(3,4))
filter([:a,:b]=>[[1,2],[3,4]].sql_array)
{:a=>1} & :b # a = 1 AND b
{:a=>1} | :b # a = 1 OR b
:a.as(:b) # a AS b
'a'.as(:b) # 'a' AS b
:a.cast(:integer) # CAST(a AS integer)
:a.cast_numeric << 1 # CAST(a AS integer) << 1
'1.0'.cast_numeric(:real) # CAST('1.0' AS real)
:a.cast_string + :b # CAST(a AS varchar(255)) || b

Core Class Methods (3)
'a' # 'a'
'a'.lit # a
'a'.to_sequel_blob # Needed for dealing with blob
# columns on most databases
:a + :b - :c * :d / :e # (a + b) - ((c * d) / e)
:a & :b | ~:c # (a AND b) OR NOT c
:a.sql_function # a()
:a.sql_function(:b, :c) # a(b, c)
:a.extract(:year) # EXTRACT(year FROM a)
# Except on Ruby 1.9:
:a < 'a'; :b >= 1 # a < 'a'; b >= 1
:a[] # a()
:a[:b, :c] # a(b, c)

Core Class Methods (4)
:a__b # a.b
:a__b.identifier # a__b
:a.qualify(:b) # b.a
:a.qualify(:b__a) # b.a.a
:b__c.qualify(:a) # a.b.c
:a.like(:b) # PostgreSQL: a LIKE b
:a.like('b') # MySQL: a LIKE BINARY 'b'
:a.like(/b/) # PostgreSQL: a ~ 'b'
:a.like(/b/) # MySQL: a REGEXP BINARY 'b'
:a.ilike(:b) # MySQL: a LIKE b
:a.ilike('b') # PostgreSQL: a ILIKE 'b'
:a.ilike(/b/) # PostgreSQL: a ~* 'b'
:a.ilike(/b/) # MySQL: a REGEXP 'b'
:a.sql_number << 1 # a << 1

sequel command line tool
Provide database connection string or path to yaml file as argument
Gives you an irb shell with DB constant already defined
Options:
-E option echoes all SQL used to stdout
-e option specifies the environment to use in the yaml file
-m option runs the migrator with the path to the given directory
-M tells the migrator which version to migrate to
-l option logs all SQL to a file
-L option loads all files (usually model files) in a given directory
Great for quick access and seeing how Sequel works
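For example (connection strings and paths are illustrative):

$ sequel postgres://user:pass@host/database
$ sequel -E -L /path/to/models config/database.yml -e production
$ sequel -m /path/to/migrations -M 5 postgres://host/database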

Using Sequel in Web Applications
Sequel is not designed with web applications in mind, though it does work well with them
The default options are a bit strict; you may want to relax them
Model.raise_on_typecast_failure: Set to false and use validates_not_string to check that non-string columns don't have string values
Model.raise_on_save_failure: Set to false if you want save to return nil or false if validations fail, instead of raising an error
Model.strict_param_setting: Set to false to not raise an error if a setter method called via mass assignment is restricted or not defined
Sequel gives you lots of options for setting model values from user input:
set_only: Given a user-provided hash, restricts access to only the attributes you provide (recommended)
set_restricted: Restricts access to the attributes you provide, in addition to the defaults
set: Uses the defaults set by the set_allowed_columns and set_restricted_columns class methods
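A sketch of set_only with an untrusted hash (params and the column names are illustrative):

attendee = Attendee.new
attendee.set_only(params[:attendee], :name, :email)
attendee.save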

Lightweight?
Requires: bigdecimal, bigdecimal/util, date, enumerator, thread, time, uri, yaml
Possible to trim some of those out with minor hacking
RAM Usage (VSZ change from requiring on i386):
Sequel 2.11.0:
sequel_core: 2064KB
sequel: 3700KB
DataMapper 0.9.10:
dm-core: 5560KB
dm-more: 14244KB
ActiveRecord
2.2.2: 12792KB
2.3.1rc2: 6588KB (nothing autoloaded)
2.3.1rc2: 9724KB (everything loaded)

Current Status
Current version: 2.11.0
Releases are generally done once a month, usually in the first week of the month
Generally there are no open bugs or features planned when a release is made
Every month there are small and not-so-small features that get added, mostly based on users' suggestions/code
Bugs on the tracker get priority, and generally are dealt with quickly (1 day-1 week)
Bugs 200-260: 48 Fixed, 5 WontFix, rest invalid/spam
An empty bug tracker is the SOP

Contributing
Contributing is very easy, no +1s required
No bureaucracy, just notify me via IRC (#sequel), the Google Group, the bug tracker, or github
Feedback for all patches is prompt, no exceptions
Patches should include specs, but they aren't required; I'll accept patches without specs and add the specs myself (if it's an obvious bug or a feature I like)
Feature requests are denied more often for philosophical reasons than for poor implementation
If I think a feature is good and the implementation is not, I'll rewrite the implementation myself

The Future: Sequel 3.0
No Grand Refactoring, break compatibility only when necessary
Move some current features into optional plugins
Dataset pagination
Dataset#query
Sequel::PrettyTable
Model caching
Model set_schema/create_table
Dataset transforms/Model serialization
All core extensions not related to the SQL DSL
Model hooks and validations class methods
Eliminate some cruft and aliases, modify some minor features/APIs
Deprecation warnings for all major changes in 2.12

Questions?
