Due to numerous reported performance issues with SQL Server 2000, we strongly recommend that you use SQL Server 2005 instead; SQL Server 2000 is not supported. Note that Confluence with SQL Server 2005 is also known to perform poorly in high-load environments. Please see the deadlock issues below.

Database Deadlock Issues

Problem

When creating a page, you may encounter deadlocks and get the following error:

 2008-11-11 17:01:21,355 ERROR [http-8080-Processor6] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
 -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,355 ERROR [http-8080-Processor6] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
 -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,357 ERROR [http-8080-Processor6] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session
 -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,357 ERROR [http-8080-Processor6] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session
 -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,359 ERROR [http-8080-Processor6] [atlassian.xwork.interceptors.XWorkTransactionInterceptor] onThrowable Invoking rollback for transaction on action '/pages/docreatepage.action (CreatePageAction.doAdd())' due to throwable: org.springframework.dao.ConcurrencyFailureException: Hibernate operation: could not delete: [com.atlassian.confluence.pages.Draft#14319839]; SQL []; Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.; nested exception is java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
 -- url: /confluence/pages/docreatepage.action | userName: tstcreator3
org.springframework.dao.ConcurrencyFailureException: Hibernate operation: could not delete: [com.atlassian.confluence.pages.Draft#14319839]; SQL []; Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.; nested exception is java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Caused by: java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
	at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:367)
	at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2816)
	at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2254)
	at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:631)
	at net.sourceforge.jtds.jdbc.JtdsStatement.processResults(JtdsStatement.java:584)
	at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:546)
	at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:505)
	at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:105)
	at net.sf.hibernate.impl.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:22)
	at net.sf.hibernate.persister.EntityPersister.delete(EntityPersister.java:581)
	at net.sf.hibernate.impl.ScheduledDeletion.execute(ScheduledDeletion.java:29)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2449)
	at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2435)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2397)
	at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2261)
	at net.sf.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:61)

This problem is more prominent in high-load environments, where there are more concurrent page creations and edits.

Reason

SQL Server is escalating row locks (in this case, while deleting a draft from the CONTENT table) to table locks, stopping other transactions from accessing the table.

Solution

Configure your database to use the 'Read Committed with Row Versioning' isolation level. You can do this by executing the following query:

ALTER DATABASE <database name>
   SET READ_COMMITTED_SNAPSHOT ON
   WITH ROLLBACK IMMEDIATE;
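
To confirm that the change took effect, you can check the database's row-versioning flag in sys.databases (a minimal check; 'confluence' is an example database name, so substitute your own):

SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'confluence';

A value of 1 in the is_read_committed_snapshot_on column indicates that Read Committed with Row Versioning is enabled.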

More information on SQL Server isolation levels can be found in the Microsoft documentation.

From version 2.10.x onwards, the required indexes are created automatically when you perform an upgrade.
If you are using an earlier version, please execute the following DDL against your Confluence database:

CREATE NONCLUSTERED INDEX [c_pageid_idx] ON [dbo].[CONTENT]
(
[PAGEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
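
If you are not sure whether the index already exists (for example, after a partially completed upgrade), you can guard the same statement with an existence check. This is a sketch using the index name c_pageid_idx from the DDL above:

IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'c_pageid_idx'
                 AND object_id = OBJECT_ID('dbo.CONTENT'))
BEGIN
    -- Same index as above, with the storage options left at their defaults
    CREATE NONCLUSTERED INDEX [c_pageid_idx] ON [dbo].[CONTENT] ([PAGEID] ASC);
END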

Unicode Characters Not Supported By Default

Problem
Non-ASCII characters will not be displayed by Confluence.

Reason
The default SQL Server dialect uses column types that do not support Unicode, specifically the char, varchar and text column types. See CONF-4786 for details.

Solution
To add Unicode support, use the Unicode SQL Server dialect, which uses the nchar, nvarchar and ntext column types. The Unicode dialect has the downside of halving the maximum length of each column from 8000 characters to 4000, as every character is stored in two bytes.
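
To check whether an existing schema was created with the Unicode dialect, you can inspect the column types of a Confluence table such as CONTENT; nvarchar and ntext columns indicate the Unicode dialect, while varchar and text indicate the default dialect:

SELECT TABLE_SCHEMA, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'CONTENT'
ORDER BY COLUMN_NAME;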

To enable the Unicode SQL Server dialect on a new setup, perform these steps prior to 'Stage 3 - Database Connection Setup' of the Database Setup For Any External Database guide:

  1. Open the <confluence installation folder>\confluence\WEB-INF\classes\database-defaults\mssql.properties file. In version 2.6 onwards, this file is located inside the /confluence/WEB-INF/lib/confluence.x.x.x.jar file. Jar files use the zip format, so you may need to extract the mssql.properties file from the jar and place it in \confluence\WEB-INF\classes\database-defaults. You can create the database-defaults directory if it does not exist.
  2. Comment the line: dialect=net.sf.hibernate.dialect.SQLServerDialect
  3. Uncomment the line: #dialect=net.sf.hibernate.dialect.SQLServerIntlDialect
  4. Start the Confluence Setup Wizard

To enable Unicode support on an existing SQL Server instance:

  1. From Confluence, create an XML site backup
  2. From your DBA tool, create a full backup of the Confluence schema contents
  3. Stop Confluence
  4. Move your home directory
  5. Drop all tables from the Confluence schema
  6. Change the dialect to Unicode using the above instructions
  7. Follow 'Stage 3 - Database Connection Setup' of Database Setup For Any External Database to set up the connection again and import the XML backup
  8. Once the Setup Wizard is complete, stop Confluence
  9. Copy any customised content and plugins from your old home directory into the new home directory
  10. Start Confluence

Note that this procedure will not restore Unicode characters that were previously stored under the non-Unicode dialect.

SQL Server 2005 and Case-Sensitive Schema Objects

A default installation of SQL Server 2005 on a Windows system configured for English will be case sensitive for schema objects. That is to say that a table called 'CONTENT' is not the same as a table called 'content'.

Confluence will not run correctly in this case. During installation of SQL Server, be sure to choose a case-insensitive schema.
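
To see which collation your SQL Server instance uses by default (new databases inherit it unless you override it), you can run the query below; a 'CI' segment in the collation name (for example SQL_Latin1_General_CP1_CI_AS) means case-insensitive, while 'CS' means case-sensitive:

SELECT SERVERPROPERTY('Collation') AS server_collation;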

Case-Sensitive Collation

'Collation' refers to a set of rules that determine how data is sorted and compared. Case sensitivity is one aspect of collation. Other aspects include sensitivity to kana (Japanese script) and to width (single- versus double-byte characters).

Case-sensitive or case-insensitive collation — how should you create your Confluence database? What about when you are migrating your existing Confluence instance from one database to another?

Setting up a New Confluence Instance

For new Confluence instances, we recommend using case-sensitive collation for your Confluence database, which is the default collation type used by many database systems. The Confluence application itself reduces all usernames into lower-case characters before they are stored in the Confluence database. Therefore, 'joebloggs', 'joeBloggs', 'JoeBloggs', etc. will be treated as the same username on a Confluence installation with case-sensitive database collation.

Migrating an Existing Confluence Instance to a Different Database

The default Confluence Standalone configuration uses case-sensitive database collation, as do databases on several other systems when created under default settings. Therefore, if you are migrating from this type of configuration to a new database, we recommend that the new database also uses case-sensitive collation. If you use case-insensitive collation instead, you may encounter data integrity problems after migration (for example, via an XML import) if data stored within your original Confluence site relied on case-sensitive distinctions.
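
Before migrating, you can check which collation an existing Confluence database uses with DATABASEPROPERTYEX ('confluence' below is an example database name, so substitute your own):

SELECT DATABASEPROPERTYEX('confluence', 'Collation') AS database_collation;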

Information on different collation options in SQL Server 2005 can be found in the Microsoft documentation.

Use jTDS Driver 1.0.3 or Later for JDBC Connection - Reason 1

Problem
When using jTDS driver 1.0.2 or earlier, Confluence may freeze when performing certain functions, and you see a warning like the one displayed below:

[ERROR] ActionSupport - An error occured while storing the requested page! <org.springframework.jdbc.UncategorizedSQLException: (Hibernate operation): encountered SQLException
[The amount of data read from the stream is not = length.]; nested exception is java.sql.SQLException: The amount of data read from the stream is not = length.>

This can occur with jTDS driver 1.0.2 but is fixed in 1.0.3 (see the jTDS homepage). It prevents backups from succeeding and locks access to the database when certain pages are viewed.

Solution
Upgrade to jTDS driver 1.0.3 or later from the jTDS download page.

Use jTDS Driver 1.0.3 or Later for JDBC Connection - Reason 2

Problem
When using MS SQL Server without the jTDS drivers, you may receive an error similar to:

net.sf.hibernate.LazyInitializationException: Exception initializing proxy: [com.atlassian.confluence.spaces.SpaceDescription#5]
 at net.sf.hibernate.proxy.LazyInitializer.initializeWrapExceptions(LazyInitializer.java:64)
 at net.sf.hibernate.proxy.LazyInitializer.getImplementation(LazyInitializer.java:164)
...
Caused by: java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC]ResultSet can not re-read row data for column 2.
 at com.microsoft.jdbc.base.BaseExceptions.createException(Unknown Source)
 at com.microsoft.jdbc.base.BaseExceptions.getException(Unknown Source)
 at com.microsoft.jdbc.base.BaseResultSet.validateColumnIndex(Unknown Source)
 at com.microsoft.jdbc.base.BaseResultSet.getLong(Unknown Source)
 at com.microsoft.jdbc.base.BaseResultSet.getLong(Unknown Source)
 at org.jboss.resource.adapter.jdbc.WrappedResultSet.getLong(WrappedResultSet.java:338)
 at net.sf.hibernate.type.LongType.get(LongType.java:18)
 at net.sf.hibernate.type.NullableType.nullSafeGet(NullableType.java:62)
 at net.sf.hibernate.type.NullableType.nullSafeGet(NullableType.java:53)
 at net.sf.hibernate.type.ManyToOneType.hydrate(ManyToOneType.java:61)
 at net.sf.hibernate.loader.Loader.hydrate(Loader.java:686)
 at net.sf.hibernate.loader.Loader.loadFromResultSet(Loader.java:627)
 at net.sf.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:586)
 at net.sf.hibernate.loader.Loader.getRow(Loader.java:501)
 at net.sf.hibernate.loader.Loader.getRowFromResultSet(Loader.java:213)
 at net.sf.hibernate.loader.Loader.doQuery(Loader.java:281)
 at net.sf.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:133)
 at net.sf.hibernate.loader.Loader.loadEntity(Loader.java:911)
 at net.sf.hibernate.loader.Loader.loadEntity(Loader.java:931)
 at net.sf.hibernate.loader.EntityLoader.load(EntityLoader.java:59)
 at net.sf.hibernate.loader.EntityLoader.load(EntityLoader.java:51)
 at net.sf.hibernate.persister.EntityPersister.load(EntityPersister.java:419)

Confluence uses a component called Hibernate. According to the Hibernate documentation, there may be issues with the Microsoft JDBC drivers that result in the error you are seeing.

Solution
Upgrade to jTDS driver 1.0.3 or later from the jTDS download page. You may also consider alternative SQL Server drivers listed on the Hibernate page.

Configuring Database Character Encoding

When creating the database schema, the database character encoding must be compatible with the application and Confluence character encoding as described in Configuring Database Character Encoding. If setting MS SQL to use UTF-8 is not an option, you can create a schema that uses UCS-2 encoding, and have the application and Confluence use UTF-8 encoding.

Setting up Authentication Mode of the SQL Server

When logging in to SQL Server or setting up your database connection through the Setup Wizard, the following error may appear:

Login failed for user 'username'. The user is not associated with a trusted SQL Server connection. (Microsoft SQL Server, Error: 18452).

The cause of this error is that the SQL Server has been configured to operate in 'Windows Authentication Mode (Windows Authentication)' and doesn't allow the use of SQL accounts.

To resolve this problem, change the Authentication Mode of the SQL Server from 'Windows Authentication Mode (Windows Authentication)' to 'Mixed Mode (Windows Authentication and SQL Server Authentication)'.

Please refer to this Microsoft document for more details.
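
As a quick check from a query window, you can ask the server which authentication mode it is running in; SERVERPROPERTY('IsIntegratedSecurityOnly') returns 1 when only Windows Authentication is allowed and 0 when Mixed Mode is enabled:

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only;

Changing the mode itself is done through the server properties in SQL Server Management Studio (Security page) and requires a restart of the SQL Server service.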

Moving from One SQL Server to Another

In Confluence, tables are created using the database login name as the schema rather than the dbo owner. Therefore, when moving data from one SQL Server to another, you need to create the database user on the new server before exporting and importing the data; otherwise, that user cannot be created afterwards.

To illustrate with an example, say the Confluence database login was 'confuser' — the table names would be 'confuser.table1', 'confuser.table2', etc.

When performing a database backup and restore into the new server, the tables will still be 'confuser.table1' and 'confuser.table2'.

However, if the login 'confuser' has not first been created on the new server, you can no longer access the tables as 'confuser': although the database (including its user object) was copied across, the server-level login was not. You also cannot simply create the login with the same name at this stage, because SQL Server will complain that there is already an object in the database with the same name (the user object).
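
As a sketch of the recommended order, using the 'confuser' example above (substitute your own login name, password and database name), create the login on the destination server before restoring; if the database has already been restored and the user is orphaned, re-map it to the new login:

-- Create the server-level login before restoring the Confluence database
CREATE LOGIN confuser WITH PASSWORD = 'change-me';

-- If the database was restored first and 'confuser' is now orphaned,
-- re-map the database user to the newly created login
USE confluence;
EXEC sp_change_users_login 'Update_One', 'confuser', 'confuser';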

Known Issues

Database Deadlock on Microsoft SQL Server
Invalid Username or Password when Delegating User Management to use JIRA Logins
Invalid object name hibernate_unique_key due to Invalid Table Name
Unicode Characters Not Supported By Default
Use jTDS Driver 1.0.3 or Later for JDBC Connection