Confluence Docs 3.0 : Known Issues For SQL Server
This page last changed on Sep 17, 2009 by jlargman.
Database Deadlock Issues

Problem

When creating a page, you may encounter deadlocks and get the following error:

2008-11-11 17:01:21,355 ERROR [http-8080-Processor6] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,355 ERROR [http-8080-Processor6] [sf.hibernate.util.JDBCExceptionReporter] logExceptions Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,357 ERROR [http-8080-Processor6] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,357 ERROR [http-8080-Processor6] [sf.hibernate.impl.SessionImpl] execute Could not synchronize database state with session -- url: /confluence/pages/docreatepage.action | userName: tstcreator3 | action: docreatepage
2008-11-11 17:01:21,359 ERROR [http-8080-Processor6] [atlassian.xwork.interceptors.XWorkTransactionInterceptor] onThrowable Invoking rollback for transaction on action '/pages/docreatepage.action (CreatePageAction.doAdd())' due to throwable: org.springframework.dao.ConcurrencyFailureException: Hibernate operation: could not delete: [com.atlassian.confluence.pages.Draft#14319839]; SQL []; Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.; nested exception is java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. -- url: /confluence/pages/docreatepage.action | userName: tstcreator3
org.springframework.dao.ConcurrencyFailureException: Hibernate operation: could not delete: [com.atlassian.confluence.pages.Draft#14319839]; SQL []; Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.; nested exception is java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Caused by: java.sql.SQLException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
    at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:367)
    at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2816)
    at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2254)
    at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:631)
    at net.sourceforge.jtds.jdbc.JtdsStatement.processResults(JtdsStatement.java:584)
    at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:546)
    at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:505)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:105)
    at net.sf.hibernate.impl.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:22)
    at net.sf.hibernate.persister.EntityPersister.delete(EntityPersister.java:581)
    at net.sf.hibernate.impl.ScheduledDeletion.execute(ScheduledDeletion.java:29)
    at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2449)
    at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2435)
    at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2397)
    at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2261)
    at net.sf.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:61)

This problem is more prominent in high-load environments, where there are more concurrent page creates and edits.

Reason

SQL Server is escalating row locks (in this case, while deleting a draft from the CONTENT table) to table locks, which stops other transactions from accessing the table.

Solution

Configure your database to use the Read Committed with Row Versioning isolation level. You can do this by executing the following query:

ALTER DATABASE <database name> SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

More information on SQL Server isolation levels can be found in the Microsoft documentation.

From Confluence 2.10.x onwards, the following index is created automatically when you perform an upgrade. On earlier versions, you can create it manually:

CREATE NONCLUSTERED INDEX [c_pageid_idx] ON [dbo].[CONTENT]
(
    [PAGEID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
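If you want to confirm that the setting has taken effect, one option (not part of the original instructions) is to query the sys.databases catalog view. The database name below is a placeholder:

-- Sketch only: confirm that row versioning is enabled for the Confluence database.
-- Replace 'confluence' with your actual database name.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'confluence';
-- is_read_committed_snapshot_on = 1 means READ_COMMITTED_SNAPSHOT is on.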
Unicode Characters Not Supported By Default

Problem

Reason

Solution

Enable the Unicode SQL Server dialect. On a new setup, perform these steps prior to 'Stage 3 - Database Connection Setup' of the Database Setup For Any External Database:

For existing SQL Server instances wishing to enable Unicode support:

This will not restore previously stored Unicode characters.
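As general background (this illustration is not from the original page and is only an assumption about why the default setup mishandles Unicode), SQL Server stores Unicode text only in the N-prefixed types, while plain character types are limited to the collation's code page. A minimal T-SQL sketch that demonstrates the difference:

-- Illustrative sketch only; the table and data are hypothetical.
CREATE TABLE unicode_test (
    plain VARCHAR(50),   -- limited to the database collation's code page
    wide  NVARCHAR(50)   -- stores Unicode (UCS-2)
);
INSERT INTO unicode_test (plain, wide) VALUES (N'日本語', N'日本語');
SELECT plain, wide FROM unicode_test;  -- 'plain' typically comes back as '???', 'wide' is intact
DROP TABLE unicode_test;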
SQL Server 2005 and Case-Sensitive Schema Objects

A default installation of SQL Server 2005 on a Windows system configured for English will be case sensitive for schema objects. That is to say, a table called 'CONTENT' is not the same as a table called 'content'. Confluence will not run correctly in this case. During installation of SQL Server, be sure to choose a case-insensitive schema.
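If you are unsure how an existing server is configured, one way to check (not part of the original page) is to inspect the server collation name: collations containing 'CI' are case-insensitive, while those containing 'CS' are case-sensitive.

-- Sketch: inspect the server-level collation.
-- A name such as SQL_Latin1_General_CP1_CI_AS is case-insensitive (CI);
-- a name containing _CS_ would be case-sensitive.
SELECT SERVERPROPERTY('Collation') AS server_collation;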
Case-Sensitive Collation

Should you create your Confluence database with case-sensitive or case-insensitive collation? And what about when you are migrating an existing Confluence instance from one database to another?

Setting up a New Confluence Instance

For new Confluence instances, we recommend using case-sensitive collation for your Confluence database, which is the default collation type used by many database systems. The Confluence application itself reduces all usernames to lower-case characters before they are stored in the Confluence database. Therefore, 'joebloggs', 'joeBloggs', 'JoeBloggs', etc. will be treated as the same username on a Confluence installation with case-sensitive database collation.

Migrating an Existing Confluence Instance to a Different Database

The default Confluence Standalone configuration uses case-sensitive database collation. This is often also the case for databases on other systems that were created with default settings. Therefore, if you are migrating from this type of configuration to a new database, we recommend that the new database uses case-sensitive collation. If you use case-insensitive collation, you may encounter data integrity problems after migration (for example, via an XML import) if data stored in your original Confluence site relied on case-sensitive distinctions.

Information on the different collation options in SQL Server 2005 can be found in the Microsoft documentation.
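For example (a sketch only; the database name and the specific collation are illustrative and not taken from the original page), you could create the Confluence database with an explicit case-sensitive collation and then confirm the setting:

-- Hypothetical example: create the Confluence database with a case-sensitive,
-- accent-sensitive collation, then verify it.
CREATE DATABASE confluence COLLATE SQL_Latin1_General_CP1_CS_AS;
SELECT DATABASEPROPERTYEX('confluence', 'Collation') AS database_collation;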
Use jTDS Driver 1.0.3 or Later for JDBC Connection - Reason 1

Problem

[ERROR] ActionSupport - An error occured while storing the requested page!
<org.springframework.jdbc.UncategorizedSQLException: (Hibernate operation): encountered SQLException [The amount of data read from the stream is not = length.]; nested exception is java.sql.SQLException: The amount of data read from the stream is not = length.>

This can occur with jTDS driver 1.0.2 but is fixed in 1.0.3; see the jTDS homepage. It will prevent backups from succeeding and lock access to the database when viewing certain pages.

Solution

Use jTDS driver 1.0.3 or later.

Use jTDS Driver 1.0.3 or Later for JDBC Connection - Reason 2

Problem

net.sf.hibernate.LazyInitializationException: Exception initializing proxy: [com.atlassian.confluence.spaces.SpaceDescription#5]
    at net.sf.hibernate.proxy.LazyInitializer.initializeWrapExceptions(LazyInitializer.java:64)
    at net.sf.hibernate.proxy.LazyInitializer.getImplementation(LazyInitializer.java:164)
    ...
Caused by: java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC]ResultSet can not re-read row data for column 2.
    at com.microsoft.jdbc.base.BaseExceptions.createException(Unknown Source)
    at com.microsoft.jdbc.base.BaseExceptions.getException(Unknown Source)
    at com.microsoft.jdbc.base.BaseResultSet.validateColumnIndex(Unknown Source)
    at com.microsoft.jdbc.base.BaseResultSet.getLong(Unknown Source)
    at com.microsoft.jdbc.base.BaseResultSet.getLong(Unknown Source)
    at org.jboss.resource.adapter.jdbc.WrappedResultSet.getLong(WrappedResultSet.java:338)
    at net.sf.hibernate.type.LongType.get(LongType.java:18)
    at net.sf.hibernate.type.NullableType.nullSafeGet(NullableType.java:62)
    at net.sf.hibernate.type.NullableType.nullSafeGet(NullableType.java:53)
    at net.sf.hibernate.type.ManyToOneType.hydrate(ManyToOneType.java:61)
    at net.sf.hibernate.loader.Loader.hydrate(Loader.java:686)
    at net.sf.hibernate.loader.Loader.loadFromResultSet(Loader.java:627)
    at net.sf.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:586)
    at net.sf.hibernate.loader.Loader.getRow(Loader.java:501)
    at net.sf.hibernate.loader.Loader.getRowFromResultSet(Loader.java:213)
    at net.sf.hibernate.loader.Loader.doQuery(Loader.java:281)
    at net.sf.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:133)
    at net.sf.hibernate.loader.Loader.loadEntity(Loader.java:911)
    at net.sf.hibernate.loader.Loader.loadEntity(Loader.java:931)
    at net.sf.hibernate.loader.EntityLoader.load(EntityLoader.java:59)
    at net.sf.hibernate.loader.EntityLoader.load(EntityLoader.java:51)
    at net.sf.hibernate.persister.EntityPersister.load(EntityPersister.java:419)

Confluence uses a component called Hibernate. According to the Hibernate documentation, there may be issues with the Microsoft JDBC drivers that result in the error you are seeing.

Solution

Use jTDS driver 1.0.3 or later.

Configuring Database Character Encoding

When creating the database schema, the database character encoding must be compatible with the application and Confluence character encoding, as described in Configuring Database Character Encoding. If setting MS SQL Server to use UTF-8 is not an option, you can create a schema that uses UCS-2 encoding and have the application and Confluence use UTF-8 encoding.

Setting up Authentication Mode of the SQL Server

While logging in to SQL Server, or while setting up your database connection through the Setup Wizard, the following error may appear:

Login failed for user 'username'. The user is not associated with a trusted SQL Server connection. (Microsoft SQL Server, Error: 18452)

The cause of this error is that the SQL Server has been configured to operate in 'Windows Authentication Mode (Windows Authentication)' and doesn't allow logins that use a SQL Server username and password. To resolve this problem, change the Authentication Mode of the SQL Server from 'Windows Authentication Mode (Windows Authentication)' to 'SQL Server and Windows Authentication Mode (Mixed Mode)'. Please refer to this Microsoft document for more details.
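As a hedged illustration (not from the original page; the login name and password are placeholders), you can check the current authentication mode from T-SQL and, once Mixed Mode is enabled, create a SQL Server login for Confluence:

-- Sketch: check whether the server is in Windows-only authentication mode.
-- 1 = Windows Authentication only, 0 = SQL Server and Windows Authentication (Mixed Mode).
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only;

-- After switching to Mixed Mode (done through the SQL Server management tools),
-- a SQL Server login and database user can be created for Confluence.
CREATE LOGIN confluenceuser WITH PASSWORD = 'ChangeMe!123';
USE confluence;
CREATE USER confluenceuser FOR LOGIN confluenceuser;
-- Granting db_owner is one common approach for a dedicated Confluence database:
EXEC sp_addrolemember 'db_owner', 'confluenceuser';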
Moving from One SQL Server to Another

In Confluence, tables are created using the database login name as the schema rather than the dbo owner. Therefore, when moving data from one SQL Server to another, you need to create the database user on the new server before exporting and importing the data; otherwise that user cannot be created afterwards.

To illustrate with an example, say the Confluence database login was 'confuser': the table names would be 'confuser.table1', 'confuser.table2', and so on. When performing a database backup and restore into the new server, the tables will still be 'confuser.table1' and 'confuser.table2'. However, if the database user 'confuser' has not been created first on the new server, you can no longer access the tables with a login of 'confuser', because although the database (including its user object) was copied across, the server login object was not. You cannot create the login with the same name at this stage, as SQL Server will complain that there is already an object in the database with the same name (the user object).
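A sketch of the recommended order, using the illustrative 'confuser' login from the example above (the commands are not from the original page and are only one way to prepare the destination server; password and database name are placeholders):

-- Sketch only: on the destination server, create the login BEFORE restoring or importing
-- the Confluence data, so the restored 'confuser.*' tables remain accessible.
CREATE LOGIN confuser WITH PASSWORD = 'ChangeMe!123';
-- ... restore the backup or import the data into the Confluence database here ...

-- General SQL Server technique (an assumption, not part of the original advice): after a
-- restore you can list database users that are not mapped to a server login and re-map them.
USE confluence;
EXEC sp_change_users_login 'Report';
EXEC sp_change_users_login 'Update_One', 'confuser', 'confuser';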