Sybase iAnywhere SQL Anywhere Mobile and Embedded Database

I'd rather play golf

Thoughts on data management, autonomic computing, and self-managing database systems.


Call for Papers: 2012 DBTEST Workshop, Scottsdale, AZ

By Glenn Paulley on February 7th, 2012

This year’s DBTEST Workshop, being organized by Florian Waas of EMC/Greenplum and Eric Lo of Hong Kong Polytechnic University, is co-located with the 2012 ACM SIGMOD Conference and will be held on Monday, 21 May 2012 in Scottsdale, Arizona.

Eric and Florian, along with the workshop’s program committee which includes Guy Lohman (IBM), Jens Dittrich (University of the Saarland), Meikel Poess (Oracle), and myself, are soliciting six-page extended abstracts on a wide variety of topics related to database application and database system testing:

Besides technical contributions around test ideas, test frameworks and methodologies, we also solicit vision papers and war stories regarding quality issues. As somebody who is actively involved in the practical aspects of building data management systems, I’m certain you have either specific systems knowledge that could serve as help and advice for other researchers, or open questions that may inspire colleagues to start researching a topic! We count on practitioners to help us enrich the program with war stories that help workshop participants recognize common problems.

This year’s workshop features a keynote address from Oxford’s Oege de Moor, the principal developer behind the Semmle code analysis products, which feature the query language .QL.

Papers are due Monday, March 12, 2012 (5PM Pacific Time). Notifications of acceptance will be sent on or before April 5, and camera-ready copies are due April 22.

See the Workshop site for additional information.

Comments Off · Posted in: Computer Science education · Self-managing database systems · Semmle .QL

Database Self-Management: Taming the Monster

By Glenn Paulley on December 23rd, 2011

This past July, when Duke University’s Shivnath Babu presented his talk on Starfish at the University of Waterloo, Shivnath invited my team to contribute a paper to a special forthcoming issue of the IEEE Data Engineering Bulletin, to be published this December.

This special issue, edited by Shivnath and Kai-Uwe Sattler of Germany’s Ilmenau University of Technology, has now been published online, and constitutes a collection of experience papers on database self-management. Our contribution is entitled “Database Self-Management: Taming the Monster” [1] and the paper outlines some of the lessons and experiences of the SQL Anywhere development team over the twenty-year history of the SQL Anywhere product. Here’s the abstract:

We describe our experience in the development of self-managing technologies in Sybase SQL Anywhere, a full-featured relational database system that is commonly embedded with applications developed by Independent Software Vendors (ISVs). We illustrate how SQL Anywhere’s embeddability features work in concert to provide a robust data management solution in zero-administration environments.

Our short paper covers various technologies used within SQL Anywhere, and outlines how we have designed them to offer self-managing characteristics. The list of technologies includes our query optimizer, buffer pool management, self-tuning multiprogramming level, and our spatial data support.

This special issue of the IEEE Data Engineering Bulletin is a great read, as the other papers that make up the special issue are authored by other well-known pioneers in self-managing technology, including Surajit Chaudhuri and Nico Bruno of Microsoft Research.

[1] Mohammed Abouzour, Ivan T. Bowman, Peter Bumbulis, David E. DeHaan, Anil K. Goel, Anisoara Nica, G. N. Paulley, and John Smirnios (December 2011). Database Self-Management: Taming the Monster. IEEE Data Engineering Bulletin 34(4), pp. 6-11.

Comments Off · Posted in: Database Management Systems · Self-managing database systems · SQL Anywhere

SQL:2011 is published

By Glenn Paulley on December 16th, 2011

As I mentioned in September, the latest version of the ISO SQL Standard had been in the final draft (FDIS) stage through the autumn, and yesterday, 15 December, SQL:2011 became the official version of the SQL standard and is now available for purchase on the ISO website.

The SQL:2011 standard is not yet available from ANSI’s webstore, but I expect it will be shortly; I will post a comment when copies become available from ANSI.

→ 3 Comments · Posted in: SQL Standard

Using Cassandra as a cloud file system

By Glenn Paulley on November 4th, 2011

On Wednesday this week (November 2) I had the opportunity to listen to Ken Salem of the University of Waterloo present a lecture entitled “A Scalable, Available Storage Tier for RDBMS” to the Database Systems Group at UW’s Cheriton School of Computer Science. This joint work with co-authors Ashraf Aboulnaga and post-doctoral fellow Rui Liu considers the use of the Cassandra distributed database system as a robust, distributed filesystem to support a cloud of MySQL database servers. For me, the most interesting part of the project – currently nicknamed DBECS (pronounced “D-Backs”, as in the Arizona Diamondbacks major league baseball team) – is how closely this research project dovetails with the goals of SQL Anywhere’s On Demand Edition (codenamed “Fuji”), particularly its approach of using independent tenant databases rather than multi-tenancy solutions. My notes follow:


The objective of the DBECS project is to develop a multi-tenant database management service with elastic scalability of storage capacity, 24×7 operation, and full transaction and SQL support. In general, “NoSQL” systems (e.g., HBase, Cassandra) trade off functionality for scalability and availability by implementing simple key-value stores with atomic access to a single “record”. In contrast, the idea behind DBECS is to start with an established relational DBMS that provides relatively complete ACID semantics and full SQL support, and see how the software infrastructure can be changed to support higher levels of scalability.

Aside: this is where DBECS mirrors what we at Sybase iAnywhere are trying to accomplish with SQL Anywhere On Demand Edition: offer full SQL and transaction support – indeed, requiring no application changes – and support higher scalability not by increasing the size of a multi-tenant database instance, but by scaling in the number of databases.

With DBECS, the idea is to:

  • Use a MySQL instance to host each tenant database;
  • Use Cassandra as the cloud storage tier to provide 24×7 availability and geographic distribution.

Benefits of this approach:

  • scalable, elastic storage capacity and bandwidth;
  • scalable, elastic tenancy;
  • highly available storage tier with no single point of failure;
  • full-function SQL support with ACID transactions.

There are, however, things that DBECS does not provide; in particular, there are no inherent improvements to the scaling of an individual, hosted MySQL database. However, an always-available storage tier might simplify high-availability mechanisms, and could be exploited to restore the database quickly in case of corruption.

The DBECS Prototype

The prototype system uses an everyday MySQL server (using INNODB) to host a tenant database instance, hence providing full MySQL facilities including transaction atomicity and durability without the need for application changes. (Aside: this is the same idea with SQL Anywhere Fuji).

Rather than use local disk for the database storage, however, the DBECS prototype uses Cassandra as a glorified, high-availability file system. The advantages of Cassandra are:

  • Cassandra uses a multi-master replication paradigm;
  • supports geographically-distributed, multiple data centers;
  • has partition tolerance;
  • supports fine grained (per operation) control of the consistency/performance tradeoff; that is, the application (MySQL in this case) can specify the desired semantics of each read and write operation (e.g. slow and consistent, fast but not ACID);
  • Cassandra supports client-controlled update serialization; in DBECS, the client is a MySQL DBMS, which understands something about updates and serialization.

An underlying assumption with Cassandra is that there is a single writer (MySQL instance) for each database. This means that the consistency issue is a staleness issue, not a concurrent-write issue.

Using Cassandra

Cassandra stores “column families”, which are tables of semi-structured records accessed by key; the primitive operations in Cassandra are reading a field from a record, and updating a field in a record. Cassandra records are replicated and distributed by hashing keys.

Cassandra supports per-operation consistency specification. For example, Cassandra supports write(1) versus write(ALL) semantics. With the former, Cassandra distributes the write to all replicas, but the operation returns as soon as one copy has acknowledged the write; in contrast, write(ALL) waits until all copies have been written. Similarly, with read(1) versus read(ALL), the former returns a record from any single copy, while the latter reads from every replica and returns the record with the most recent timestamp. While read(ALL) provides “quorum consensus”, its cost is not only the read latency of contacting every replica copy, but also an availability problem: if not all replicas are available, the read operation will block. Other variations, such as read(2), are also possible.
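These per-operation consistency levels can be made concrete with a toy model (plain Python with in-memory tuples, not the Cassandra API; all names and the repair mechanism are illustrative):

```python
import time

class ReplicatedRecord:
    """Toy model of a Cassandra record replicated across N nodes.
    Each replica stores (value, timestamp); write(1) updates only one
    replica immediately, leaving the others stale until propagation."""

    def __init__(self, n_replicas=3):
        self.replicas = [(None, 0.0)] * n_replicas

    def write_one(self, value):
        # write(1): acknowledge after a single replica is written.
        self.replicas[0] = (value, time.time())

    def repair(self):
        # Eventual propagation: the newest copy reaches every replica.
        newest = max(self.replicas, key=lambda r: r[1])
        self.replicas = [newest] * len(self.replicas)

    def read_one(self, replica_index):
        # read(1): return whatever the contacted replica has (may be stale).
        return self.replicas[replica_index][0]

    def read_all(self):
        # read(ALL): consult every replica, keep the newest timestamp.
        return max(self.replicas, key=lambda r: r[1])[0]

rec = ReplicatedRecord()
rec.write_one("v1")
print(rec.read_one(2))   # a stale replica: still None
print(rec.read_all())    # the quorum-style read sees "v1"
rec.repair()
print(rec.read_one(2))   # after propagation, read(1) sees "v1" everywhere
```

The model shows why read(1) after write(1) is only eventually consistent: until propagation completes, a read(1) directed at the wrong node returns stale data, while read(ALL) always finds the newest copy at the cost of contacting every replica.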

Cassandra as a DBMS storage tier

As stated above, DBECS uses MySQL with its INNODB storage layer. INNODB issues block reads/writes to the underlying filesystem. With DBECS, a shim was added to intercept read/write calls to the I/O interface, and replace them with Cassandra read/write API calls to the Cassandra storage layer. In a nutshell, DBECS treats a Cassandra column family as a big (virtual) disk, addressable by block ID; so MySQL INNODB blocks are mapped to Cassandra records, keyed by block ID, where the key for a block is the triple <dbname, file number, block number>.
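A sketch of that addressing scheme follows; the key encoding and the class names are my own assumptions for illustration, since the talk did not describe a wire format:

```python
# Each INNODB block maps to one record in a Cassandra column family,
# keyed by the triple <dbname, file number, block number>.

def block_key(dbname: str, file_no: int, block_no: int) -> bytes:
    """Encode the triple as a single row key (encoding assumed)."""
    return f"{dbname}:{file_no}:{block_no}".encode("utf-8")

class CassandraDiskShim:
    """Stand-in for the DBECS I/O shim: intercepts INNODB block
    reads/writes and redirects them to a key-value store (a plain
    dict here plays the role of the Cassandra column family)."""

    def __init__(self, dbname: str):
        self.dbname = dbname
        self.store = {}

    def write_block(self, file_no: int, block_no: int, data: bytes):
        self.store[block_key(self.dbname, file_no, block_no)] = data

    def read_block(self, file_no: int, block_no: int) -> bytes:
        return self.store[block_key(self.dbname, file_no, block_no)]

shim = CassandraDiskShim("tenant42")
shim.write_block(0, 7, b"\x00" * 16384)     # a 16 KB INNODB page
assert shim.read_block(0, 7) == b"\x00" * 16384
```

Because the database name is part of every key, blocks from different tenant databases can share one column family without colliding, which fits the one-database-per-tenant design.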

What’s interesting about this design are the inherent tradeoffs. For example, write(ALL) coupled with read(1) works, but write performance is poor, and it creates an availability problem, since all copies must be available or the I/O operation will block. Ideally, one would want to use read(1) and write(1) exclusively, but the issue there is that read staleness (inconsistency) is interpreted by INNODB as a corrupt database: with read(1)/write(1), there is no guarantee that what the MySQL server wrote will be read back from the same node. With write(1), Cassandra waits for only one write operation before returning; eventually all writes will complete, but a read from a node where the write has not yet completed will return stale data.

To counter this problem, Ken, Ashraf, and Rui consider the introduction of “optimistic I/O” semantics. Cassandra supports version numbers; so blocks are augmented with version numbers and timestamps to check for staleness. If a read operation matches the expected version number, the read is successful, but if a mismatch occurs then the read(1) operation can be retried (aggressive) or read(ALL) can be used (conservative).
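The optimistic-read protocol can be sketched as follows (Python; the callbacks stand in for Cassandra’s read(1) and read(ALL) operations, and the retry count is an illustrative choice, not taken from the talk):

```python
class StaleRead(Exception):
    """Raised when no replica can supply the expected version."""

def optimistic_read(read_one, read_all, expected_version, max_retries=2):
    """Try the cheap read(1) path first; if the returned version does
    not match the version the client last wrote, retry (aggressive),
    then fall back to the conservative read(ALL)."""
    for _ in range(max_retries):
        version, data = read_one()
        if version == expected_version:
            return data                  # fast path: replica is current
    version, data = read_all()           # conservative fallback
    if version != expected_version:
        raise StaleRead("no replica has the expected version")
    return data

# Simulation: read(1) happens to hit a stale node twice, so the
# fallback read(ALL) is needed to find the newest copy.
replies = iter([(4, b"old"), (4, b"old")])
data = optimistic_read(
    read_one=lambda: next(replies),
    read_all=lambda: (5, b"new"),
    expected_version=5,
)
assert data == b"new"
```

The client (the INNODB shim) can make this check because it is the single writer: it always knows the version number of the block it last wrote.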

Cassandra will detect and recover from node failures. However, with the above model, failures are not always hidden from the client (the MySQL INNODB storage layer), and there is a question as to the reliability and safety of write(1). To improve fault tolerance, DBECS adds support for client-controlled I/O synchronization for better tolerance of Cassandra failures: when INNODB issues a file system sync call (fsync) to the underlying file system, the DBECS shim issues a “Cassandra sync”, which tries to guarantee durability by waiting long enough for multiple copies to be updated.
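One way to picture the “Cassandra sync” idea is a call that blocks until enough replicas have acknowledged the outstanding writes. This is a simulation using a shared counter; the real system would rely on Cassandra’s own acknowledgement machinery, and the timeout values are invented:

```python
import threading
import time

def cassandra_sync(ack_counter, required_copies, timeout=5.0, poll=0.01):
    """Block until at least `required_copies` replicas have acknowledged
    the outstanding writes, or the timeout expires. Models the durability
    wait performed when INNODB issues an fsync."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ack_counter["acks"] >= required_copies:
            return True               # enough copies are durable
        time.sleep(poll)
    return False                      # durability not confirmed in time

# Simulate three replicas acknowledging a write one by one.
acks = {"acks": 0}
def replica_acks():
    for _ in range(3):
        time.sleep(0.02)
        acks["acks"] += 1

threading.Thread(target=replica_acks).start()
assert cassandra_sync(acks, required_copies=2)   # returns once 2 copies are safe
```

The design choice here mirrors the per-operation consistency idea: the expensive durability wait is paid only at fsync boundaries, where INNODB actually requires it, rather than on every block write.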

DBECS is an interesting project whose premise, like SQL Anywhere On Demand Edition, is the support and scalability of individual tenant databases in a cloud environment. Ken assures me that there is nothing in DBECS specific to MySQL – any database system would do, as long as the low-level I/O API calls could be replaced with the corresponding Cassandra API calls.

I hope to see this work in the literature in short order.

→ 2 Comments · Posted in: Cloud computing · MySQL · SQL Anywhere

SQL Anywhere at Insight Out

By Glenn Paulley on October 31st, 2011

On 20 October I had the privilege to address the 2011 Insight Out Conference in Tokyo, Japan, where the majority of attendees are Japanese business application developers and the speakers are drawn from a cross-section of the database industry, with Oracle, Microsoft, PostgreSQL, MySQL, and Sybase represented. Insight Out is spearheaded by Hiroki Mizokami of Insight Technology.

For the talk, I decided to do a one-hour overview of the self-managing features of SQL Anywhere 12, in particular highlighting two new server features in SQL Anywhere 12: auto-tuning of the server’s multiprogramming level, and the self-healing abilities to find and correct statistical errors with our self-tuning column histograms. I also added a very brief overview of SQL Anywhere On Demand Edition, codenamed “Fuji”, to further illustrate the need for self-managing data management when it is combined with cloud technology. Slides for the talk in English are here, and the same slides in Japanese are here.

As part of the motivation for the talk, I used the example of IvanAnywhere, Ian McHardy’s home-grown telepresence robot that my colleague Ivan Bowman uses to telecommute from his home office in Nova Scotia. My sincere thanks to our iAnywhere colleagues in Japan for translating the audio soundtrack for the Ivan Anywhere video, and the soundtrack for the Fuji “Gone Fishing Software” video, into Japanese – that effort was well worthwhile, making the content so much more accessible for the conference attendees.

In addition, my thanks to my iAnywhere Solutions K.K. colleagues for their warm hospitality during my stay in Japan. It was without question the best business trip of my 25-year career.

Comments Off · Posted in: Cloud computing · SQL Anywhere

Alon Halevy: Bringing (Web) Databases to the Masses

By Glenn Paulley on October 13th, 2011

Yesterday afternoon I listened to Alon Halevy of Google Research present a lecture at the Sybase-sponsored Database Seminar Series at the University of Waterloo. Alon’s talk was entitled “Bringing (Web) Databases to the Masses” – here is Alon’s abstract:

The World-Wide Web contains vast quantities of structured data on a variety of domains, such as hobbies, products and reference data. Moreover, the Web provides a platform that can encourage publishing more data sets from governments and other public organizations and support new data management opportunities, such as effective crisis response, data journalism and crowd-sourcing data sets. To enable such wide-spread dissemination and use of structured data on the Web, we need to create an ecosystem that makes it easier for users to discover, manage, visualize and publish structured data on the Web.

I will describe some of the efforts we are conducting at Google towards this goal and the technical challenges they raise. In particular, I will describe Google Fusion Tables, a service that makes it easy for users to contribute data and visualizations to the Web, and the WebTables Project that attempts to discover high-quality tables on the Web and provide effective search over the resulting collection of 200 million tables.

Alon gave a super-interesting talk, motivating the University of Waterloo students in the audience to dream about what the web could be. Google Fusion Tables and WebTables are steps in that direction. My notes from Alon’s talk follow:

Structured data and the Web

  • there is a huge amount of structured data on the web: reference data, hobbies, products, etc. Currently we are primarily reacting to this existing data.
  • but the web could be a platform for getting more data out: government data, crime data, data regarding water conditions, the list is (nearly) boundless. Hence we could be more proactive in our use of the web.
  • but the web could also enable new kinds of data collection and management: collaboration, crowd-sourcing, real-time data, crisis response, and could then be used to invent a much brighter future

There is currently no single system that will achieve all these goals. Key processes that are required include data cleaning, querying, sharing, integrating, and visualizing. In particular, data visualization is key: data volumes on the web are such that visualization techniques are necessary to synthesize them. Synthesis should be easy – and visualization is a prerequisite. With visualization, one could really tell a story by using a sequence of visualizations.


The challenges in making the web a better place for information sharing and collaboration include:

  • finding high quality structured data
  • getting data out of silos
  • extracting data from web pages
  • databases are hard to use
  • integrating heterogeneous data
  • publishing data is cumbersome

Two projects at Google are in progress that go towards solving these problems:

  • Google Fusion Tables – a database management service for the web; and
  • Google WebTables

Google Fusion Tables

The first goal of Google Fusion Tables is to develop an easy-to-use DBMS that is integrated with the web. Google Fusion Tables key features are:

  • easy upload (CSV, KML, spreadsheets)
  • sharing (even outside your company)
  • visualizations front and center
  • easy publishing

and, perhaps most importantly, for all of these services to be available with zero administration (no DBA).

The second goal of Google Fusion Tables is to provide an integrated, poly-structured data cloud – to enable the discovery of others’ data and combine it with your own. Hence users can create tables, then create visualizations and share them. Fusion Tables has an API so application developers can develop applications that populate Fusion Tables, synthesize that data with other shared data on the web, and then visualize the result. From Alon’s talk, there is a significant thrust towards data visualization – the predictable problem is that with massive amounts of data, you must employ visualization techniques in order to analyze it.

It turns out that as part of the Fusion Tables initiative, it sort of “happened” that Google built a GIS “in the cloud” as part of the infrastructure. Challenges included “trickling”: showing only a small number of geometry or geographic features, i.e. points or polygons, from a large data set. To automatically render such data, the software needs to thin polygons, clip them to the window, and style features on the fly – and do it all in under 100ms.

Google WebTables

In his talk, Alon indicated that a 3-yr-old Google study found approximately 14 billion English HTML “tables” (i.e. English-language web content with TABLE tags.) Of these 14 billion, over 99% are “uninteresting”. Part of the problem is that there is a brittle relationship between table values and their semantics; sometimes the semantics are hidden and complicated, and on the web one does not readily have a domain model or context to exploit to analyze data semantics, particularly when you introduce cultural differences and differences in language.

In a previously-published paper (WebTables: Exploring the Relational Web (VLDB 2008)), Google researchers found that of this corpus of 14B raw “tables”, they estimated that 154 million were “good” relations (i.e. they could be used for structured data analysis). WebTables is designed to recover “good” relations from a crawl and enable search – which Alon referred to as discovering a (structured) needle in an (unstructured) haystack. Once these are discovered, one can then build novel applications using that data. To capture semantic meaning, such applications can again rely on the web, which has the same scope of semantic meanings that match the content of the tables themselves, and thus one can use the web to develop ontologies to categorize and analyze the original data.

So of the original 14B “tables”, 154 million are “good” tables, with 2.6 million distinct schemas, and 5.4 million total attributes. With the semantic information from the web, one can join these schemas by similarity in attribute names through synonym discovery.


During the talk, Alon showed a variety of examples that illustrated how one can use Google Fusion Tables to combine data sets and create information from their combination. One example Alon showed is a map that combines earthquake instances since 1973 with current nuclear installations, which conveys a lot of meaningful information very quickly.

Two requirements are a clear thrust of this research: ease of use and zero administration. Ideally, creating information should require only the minimal computing knowledge needed to accomplish the task. Google is certainly pushing the envelope in that direction. In his concluding remarks, Alon hinted that we can expect more developments in this area over the next while, as Google refines techniques to discover structured data on the web and unify that data with information from other sources.

Various academic papers on WebTables and Google Fusion Tables have been published in recent conferences:

WebTables: VLDB 2008, VLDB 2009, VLDB 2011
Fusion Tables: SIGMOD 2010, SOCC 2010 – and see
Communications of the ACM, February 2011

Comments Off · Posted in: Cloud computing · Computer Science education

Call for Papers – SMDB 2012

By Glenn Paulley on October 10th, 2011

Following six successful previous workshops, the IEEE Data Engineering Workgroup on Self-Managing Database Systems has announced the 7th International Workshop on Self-Managing Database Systems, to be co-located with the 2012 IEEE International Conference on Data Engineering (ICDE), held in Washington, DC from April 1-5, 2012.

The Workshop is being organized by Alejandro Buchmann of Technische Universität Darmstadt (Germany) and Malu Castellanos of HP-Labs. The PC includes Ken Salem and Ashraf Aboulnaga from the University of Waterloo, Shivnath Babu from Duke University, Guy Lohman and Sam Lightstone from IBM, Natassa Ailamaki from EPF Lausanne, and our own Ani Nica from Sybase Waterloo’s SQL Anywhere engineering team.

Autonomic, or self-managing, systems are a promising approach to achieve the goal of systems that are easier to use and maintain in the face of growing system complexity. A system is considered to be autonomic if it is self-configuring, self-optimizing, self-healing and/or self-protecting. The aim of the SMDB workshop is to provide a forum for researchers from both industry and academia to present and discuss ideas and experiences related to self-management and self-organization in all areas of Information Management (IM) in general. SMDB targets not only classical databases but also the new generation of storage engines such as column stores, key-value stores, and in-memory databases. Beyond databases, SMDB aims to cover autonomic aspects of data-intensive systems represented by large-scale map-reduce (e.g., Hadoop) and cloud environments, where much work on self-management is needed. Last but not least, SMDB seeks to expand its horizons to include self-management of non-traditional, new areas of IM such as social networks, distributed gaming, and peer-to-peer systems.

A slight change in emphasis is proposed for this coming workshop:

For the 2012 SMDB workshop, we want to continue to attract researchers from both the core database and other communities, such as the adaptive and event-based systems communities as enabling technologies for self-managing systems, and data-intensive internet-scale distributed systems, which should benefit from research results in SMDBs.

The plethora of new distributed, web-scale database systems – including the soon-to-be-launched beta of SQL Anywhere On Demand Edition – that have been developed over the past two to three years, and the research behind them, should make for a very interesting workshop program.

Important Dates

  • Paper submission deadline: November 14, 2011
  • Notification of acceptance: December 7, 2011
  • Camera-ready copies due: December 21, 2011
  • Workshop date: April 1, 2012

Comments Off · Posted in: Self-managing database systems

SQL/2011 near final draft stage

By Glenn Paulley on September 9th, 2011

The next version of the ISO SQL Standard, SQL/2011, is due to go to FDIS (Final Draft International Standard) ballot for approval by ISO member countries in the next month or so. The international committee that manages the SQL standard’s development is ISO/IEC JTC 1/SC 32 Data Management and Interchange/WG 3 – Database Languages. In 2007, WG3 decided to freeze four of the nine parts of the SQL standard (such as SQL/CLI). The remaining five parts, namely SQL/Framework, SQL/Foundation, SQL/PSM, SQL/Schemata, and SQL/XML, all contain revisions that will be part of the SQL/2011 release.

ISO approval of SQL/2011 as an international standard (IS) is anticipated in Q4 2011 or Q1 2012.

Briefly, the enhancements made to SQL/2011 include:

  • various extensions to WINDOW queries and window aggregate functions, some of which are already present in current versions of SQL Anywhere. These include the following optional language features:
    • T614 – the NTILE window aggregate function
    • T615 – the LEAD and LAG window aggregate functions, along with feature T616, which is the NULL treatment option for the LEAD and LAG functions.
    • T617 – the FIRST_VALUE and LAST_VALUE window aggregate functions, already supported in SQL Anywhere.
    • T618 – the NTH_VALUE window aggregate function
    • T619 – support for nested window functions
    • T620 – support for the GROUPS option in the WINDOW clause.
  • Optional SQL language feature T495, the ability to use the output of an INSERT, UPDATE, DELETE, or MERGE statement as a derived table in an SQL query (or other SQL DML statement) – already supported by SQL Anywhere.
  • the ability to use named parameters in a CALL statement (language feature T521), with or without default parameters (language feature T522). Both of these features are already supported by SQL Anywhere, though using slightly different syntax to denote a named parameter (“=” in SQL Anywhere, “=>” in the SQL standard).
  • Optional SQL language feature F492, the ability to defer or not enforce uniqueness and foreign key constraints.
various language features (F860 through F867) to better support the pagination of result sets, as required by RESTful application paradigms, including web-based applications developed using NHibernate or Hibernate. Pagination uses a FETCH clause as part of a SELECT statement, rather than the TOP/START and LIMIT/OFFSET syntax variants supported by SQL Anywhere in existing releases.
  • preliminary support for temporal tables, including system-time temporal tables (feature T180) and application-time temporal tables (feature T181). Comprehensive temporal query support is not included in SQL/2011.
  • a wide array of other minor language extensions and a multitude of bug fixes that clarify existing SQL semantics from SQL/2008.
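For readers unfamiliar with the new syntax, here is an illustrative sketch of how a few of these features read in standard SQL. Table, column, and procedure names are invented, and exact syntax and feature support vary by product:

```sql
-- T614/T615: NTILE, LAG window functions
SELECT emp_id,
       salary,
       LAG(salary) OVER (ORDER BY hire_date) AS prev_salary,
       NTILE(4)    OVER (ORDER BY salary)    AS salary_quartile
FROM employees;

-- Pagination (features F860 through F867): the FETCH clause
SELECT emp_id, surname
FROM employees
ORDER BY surname
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- T521/T522: named parameters in a CALL statement ("=>" in the standard)
CALL raise_salary(emp_id => 1234, pct => 5);
```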

→ 1 Comment · Posted in: SQL Standard

Registration now open for Ontario Women in Computer Science conference

By Glenn Paulley on August 15th, 2011

Attendees can now register for the 2011 Ontario Celebration of Women in Computing (ONCWIC) conference that will be held at the University of Toronto on 21-22 October 2011.

The Ontario Regional conference is modeled after the well-known Grace Hopper Celebration (GHC) held annually in the USA. The goal of ONCWIC is to provide students, faculty and professional women in Ontario opportunities for networking, sharing and mentoring, similar to those provided by the GHC. It is hoped that ONCWIC will help build the computing community within Ontario, with significant benefits to the attendees and their associated organizations.

The ONCWIC program focuses on research, academic and industry opportunities and networking. A combination of plenary presentations delivered by ICT leaders, plus interactive group discussions, poster sessions and great social events make this event a must for women involved in computing in both industry and academia.

This year’s conference follows a successful and well-received event held at Queen’s University in Kingston in October 2010.

Primary sponsors of this year’s event include Google, IBM Canada Limited, Research in Motion, Royal Bank of Canada, NSERC, MITACS, and the Computer Science department at the University of Toronto.

Thanks to Wendy Powley of Queen’s for sending this my way.

Comments Off · Posted in: Computer Science education

Using NHibernate 3.2.0 GA with SQL Anywhere 12.0.1

By Glenn Paulley on August 10th, 2011

Version 3.2.0 GA of the NHibernate object-relational mapping toolkit was released on July 30. Thanks to the efforts of Julian Maughan, the base NHibernate distribution contains two SQL Anywhere dialects, named SybaseSQLAnywhere10Dialect.cs and SybaseSQLAnywhere11Dialect.cs, that can be used largely unaltered with SQL Anywhere versions 10 and 11. However, some SQL Anywhere software configuration changes in the 12.0.1 release, and the lack of an NHibernate dialect for SQL Anywhere 12 in the NHibernate 3.2.0 distribution, have led me to revisit NHibernate support for SQL Anywhere 12.0.1.

In January 2010 I had previously posted an NHibernate dialect for SQL Anywhere 12.0.0 for the NHibernate 2.1.0 release. Version 3 of NHibernate has brought about several changes, including the requirement for Microsoft Visual Studio 2010 if you want to be able to build the distribution yourself – and you’ll need to do so in order to have the best software configuration to work with SQL Anywhere 12.0.1.

The executive summary is as follows:

  • The SQL Anywhere 12 dialect I posted in January 2010 is largely unaltered but for two minor changes. The first is that its class name, SQLAnywhere12Dialect, has been changed to SybaseSQLAnywhere12Dialect, in keeping with the changes to the class names for the SQL Anywhere 10 and 11 dialects that Julian made for the NHibernate 3.1.0 distribution. The second change is for the dialect to refer to the metadata class SybaseSQLAnywhere11DataBaseMetaData – see my next point.
  • The new metadata class SybaseSQLAnywhere11DataBaseMetaData in the /NHibernate/src/NHibernate/Dialect/Schema directory properly supports the GetReservedWords() API, a problem previously reported by Paul Gibson.
  • A new driver in /Nhibernate/src/NHibernate/Driver, named SybaseSQLAnywhere12Driver, is required to support the changes in SQL Anywhere 12.0.1 to the ADO.NET library names for the SQL Anywhere ADO.NET provider.
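With the recompiled NHibernate DLL in place, selecting the new classes is a matter of configuration. A sketch of the relevant session-factory properties follows; the connection string is abbreviated, and the property names follow standard NHibernate configuration conventions:

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.SybaseSQLAnywhere12Dialect</property>
    <property name="connection.driver_class">NHibernate.Driver.SybaseSQLAnywhere12Driver</property>
    <property name="connection.connection_string">DSN=helloseq</property>
  </session-factory>
</hibernate-configuration>
```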

Included here is a .zip archive of the new source files, along with an instance of HelloNHibernate that you can use for sanity testing with a recompiled NHibernate 3.2.0 DLL. Note that you’ll need to copy the SQL Anywhere .NET provider DLL, iAnywhere.Data.SQLAnywhere.v4.0.dll, into the HelloNHibernate\bin\Debug directory in order to use the included Visual Studio solution. You’ll also require a SQL Anywhere 12.0.1 server to start the included database, helloseq.db, which contains an Employee table with a sequence as its primary key.


The newly-named SybaseSQLAnywhere12Dialect class builds on the SybaseSQLAnywhere11Dialect class included with the NHibernate 3.2.0 distribution. In the main, the SybaseSQLAnywhere12Dialect class provides support for ISO standard SQL sequences, which are now supported in SQL Anywhere 12. There are two other minor changes: one is support for the TIMESTAMP WITH TIME ZONE data type (also known as DATETIMEOFFSET), and the other, as mentioned above, is the use of the repaired metadata class SybaseSQLAnywhere11DataBaseMetaData. Here is the code for SybaseSQLAnywhere12Dialect.cs, which should be placed in the NHibernate/src/NHibernate/Dialect directory:

using System;
using System.Collections;
using System.Data;
using System.Data.Common;
using System.Text.RegularExpressions;
using NHibernate.Dialect.Function;
using NHibernate.Dialect.Lock;
using NHibernate.Dialect.Schema;
using NHibernate.Engine;
using NHibernate.Exceptions;
using NHibernate.Mapping;
using NHibernate.SqlCommand;
using NHibernate.Type;
using NHibernate.Util;
using Environment = NHibernate.Cfg.Environment;
namespace NHibernate.Dialect
{
	/// <summary>
	/// SQL Dialect for SQL Anywhere 12 - for the NHibernate 3.2.0 distribution
	/// Copyright (C) 2011 Glenn Paulley
	/// Contact:
	/// This NHibernate dialect for SQL Anywhere 12 is a contribution to the NHibernate
	/// open-source project. It is intended to be included in the NHibernate
	/// distribution and is licensed under LGPL.
	/// This library is free software; you can redistribute it and/or
	/// modify it under the terms of the GNU Lesser General Public
	/// License as published by the Free Software Foundation; either
	/// version 2.1 of the License, or (at your option) any later version.
	/// This library is distributed in the hope that it will be useful,
	/// but WITHOUT ANY WARRANTY; without even the implied warranty of
	/// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
	/// Lesser General Public License for more details.
	/// You should have received a copy of the GNU Lesser General Public
	/// License along with this library; if not, write to the Free Software
	/// Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
	/// </summary>
	/// <remarks> The SybaseSQLAnywhere12Dialect
	/// uses the SybaseSQLAnywhere11Dialect as its base class.
	/// SybaseSQLAnywhere12Dialect includes support for ISO SQL standard
	/// sequences, which are defined in the catalog table <tt>SYSSEQUENCE</tt>.
	/// The dialect uses the SybaseSQLAnywhere11MetaData class for metadata API
	/// calls, which correctly supports reserved words defined by SQL Anywhere.
	/// </remarks>
	public class SybaseSQLAnywhere12Dialect : SybaseSQLAnywhere11Dialect
	{
		/// <summary></summary>
		public SybaseSQLAnywhere12Dialect()
			: base()
		{
			RegisterKeywords();
			RegisterDateTimeTypeMappings();
		}

		new protected void RegisterKeywords()
		{
			RegisterKeyword("NEAR");
			RegisterKeyword("LIMIT");
			RegisterKeyword("OFFSET");
			RegisterKeyword("DATETIMEOFFSET");
		}

		new protected void RegisterDateTimeTypeMappings()
		{
			RegisterColumnType(DbType.DateTimeOffset, "DATETIMEOFFSET");
		}

		// DDL support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

		/// <summary>
		/// SQL Anywhere supports <tt>SEQUENCES</tt> using a primarily SQL Standard
		/// syntax. Sequence values can be queried using the <tt>.CURRVAL</tt> identifier, and the next
		/// value in a sequence can be retrieved using the <tt>.NEXTVAL</tt> identifier. Sequences
		/// are retained in the SYS.SYSSEQUENCE catalog table.
		/// </summary>
		public override bool SupportsSequences
		{
			get { return true; }
		}

		/// <summary>
		/// Pooled sequences do not refer to the CACHE parameter of the <tt>CREATE SEQUENCE</tt>
		/// statement, but merely to whether the DBMS supports sequences that can be incremented or decremented
		/// by values greater than 1.
		/// </summary>
		public override bool SupportsPooledSequences
		{
			get { return true; }
		}

		/// <summary> Get the <tt>SELECT</tt> command used to retrieve the names of all sequences.</summary>
		/// <returns> The <tt>SELECT</tt> command; or NULL if sequences are not supported. </returns>
		public override string QuerySequencesString
		{
			get { return "SELECT SEQUENCE_NAME FROM SYS.SYSSEQUENCE"; }
		}

		public override string GetSequenceNextValString(string sequenceName)
		{
			return "SELECT " + GetSelectSequenceNextValString(sequenceName) + " FROM SYS.DUMMY";
		}

		public override string GetSelectSequenceNextValString(string sequenceName)
		{
			return sequenceName + ".NEXTVAL";
		}

		public override string GetCreateSequenceString(string sequenceName)
		{
			return "CREATE SEQUENCE " + sequenceName; // by default, is START WITH 1 MAXVALUE 2**63-1
		}

		public override string GetDropSequenceString(string sequenceName)
		{
			return "DROP SEQUENCE " + sequenceName;
		}

		// Informational metadata ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

		public override IDataBaseSchema GetDataBaseSchema(DbConnection connection)
		{
			return new SybaseSQLAnywhere11DataBaseMetaData(connection);
		}
	}
}
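To see concretely what the sequence-related overrides generate, here is a small sketch that instantiates the dialect and prints the SQL it produces for a hypothetical sequence named employee_seq (it assumes you have referenced the recompiled NHibernate 3.2.0 DLL containing the new dialect):

```csharp
using System;
using NHibernate.Dialect;

class SequenceSqlDemo
{
	static void Main()
	{
		var dialect = new SybaseSQLAnywhere12Dialect();

		// DDL emitted to create and drop the sequence
		Console.WriteLine(dialect.GetCreateSequenceString("employee_seq"));
		// → CREATE SEQUENCE employee_seq
		Console.WriteLine(dialect.GetDropSequenceString("employee_seq"));
		// → DROP SEQUENCE employee_seq

		// Query NHibernate issues to fetch the next value for a generated key
		Console.WriteLine(dialect.GetSequenceNextValString("employee_seq"));
		// → SELECT employee_seq.NEXTVAL FROM SYS.DUMMY
	}
}
```

Note that because SupportsPooledSequences returns true, NHibernate's hilo-style sequence optimizers can also create sequences with an INCREMENT BY greater than 1.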

One final note about the dialect file per se. The SybaseSQLAnywhere10Dialect.cs included in the NHibernate 3.2.0 distribution contains a typo for the mapping of binary types; it references an erroneous type – LONG VARBINARY – rather than the correct LONG BINARY. My error. I’ll try to ensure (via Julian) that this is fixed in a subsequent NHibernate release.
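For anyone who wants to patch this locally before an official fix appears, the correction in SybaseSQLAnywhere10Dialect.cs amounts to changing the type name in the binary column-type registration (a fragment sketch, not a complete class; the DbType key shown is an assumption about the affected registration):

```csharp
// In SybaseSQLAnywhere10Dialect's column-type registrations:
// RegisterColumnType(DbType.Binary, "LONG VARBINARY");  // erroneous: not a SQL Anywhere type
RegisterColumnType(DbType.Binary, "LONG BINARY");        // correct SQL Anywhere type name
```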


9 Comments · Posted in: Database interfaces and persistent objects · NHibernate · SQL Anywhere