Known Limitations in CockroachDB v21.1


This page describes newly identified limitations in the CockroachDB v21.1.21 release as well as unresolved limitations identified in earlier releases.

New limitations

CockroachDB does not properly optimize some left and anti joins with GIN indexes

Left joins and anti joins involving JSONB, ARRAY, or spatial-typed columns with a multi-column or partitioned GIN index will not take advantage of the index if the prefix columns of the index are unconstrained, or if they are constrained to multiple, constant values.

To work around this limitation, make sure that the prefix columns of the index are either constrained to single constant values, or are part of an equality condition with an input column (e.g., col1 = col2, where col1 is a prefix column and col2 is an input column).

For example, suppose you have the following multi-region database and tables:

CREATE DATABASE multi_region_test_db PRIMARY REGION "europe-west1" REGIONS "us-west1", "us-east1" SURVIVE REGION FAILURE;
USE multi_region_test_db;

CREATE TABLE t1 (
  k INT PRIMARY KEY,
  geom GEOMETRY
);

CREATE TABLE t2 (
  k INT PRIMARY KEY,
  geom GEOMETRY,
  INVERTED INDEX geom_idx (geom)
) LOCALITY REGIONAL BY ROW;

And you insert some data into the tables:

INSERT INTO t1 SELECT generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-east1', generate_series(1, 1000), 'POINT(1.0 1.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'us-west1', generate_series(1001, 2000), 'POINT(2.0 2.0)';
INSERT INTO t2 (crdb_region, k, geom) SELECT 'europe-west1', generate_series(2001, 3000), 'POINT(3.0 3.0)';

If you attempt a left join between t1 and t2 on only the geometry columns, CockroachDB will not be able to plan an inverted join:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom);
                info
------------------------------------
  distribution: full
  vectorized: true

  • cross join (right outer)
  │ pred: st_contains(geom, geom)
  │
  ├── • scan
  │     estimated row count: 3,000
  │     table: t2@primary
  │     spans: FULL SCAN
  │
  └── • scan
        estimated row count: 1,000
        table: t1@primary
        spans: FULL SCAN
(15 rows)

However, if you constrain the crdb_region column to a single value, CockroachDB can plan an inverted join:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1';
                       info
--------------------------------------------------
  distribution: full
  vectorized: true

  • lookup join (left outer)
  │ table: t2@primary
  │ equality: (crdb_region, k) = (crdb_region,k)
  │ equality cols are key
  │ pred: st_contains(geom, geom)
  │
  └── • inverted join (left outer)
      │ table: t2@geom_idx
      │
      └── • render
          │
          └── • scan
                estimated row count: 1,000
                table: t1@primary
                spans: FULL SCAN
(18 rows)

If you do not know which region to use, you can combine queries with UNION ALL:

> EXPLAIN SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-east1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'us-west1'
UNION ALL SELECT * FROM t1 LEFT JOIN t2 ON st_contains(t1.geom, t2.geom) AND t2.crdb_region = 'europe-west1';
                           info
----------------------------------------------------------
  distribution: full
  vectorized: true

  • union all
  │
  ├── • union all
  │   │
  │   ├── • lookup join (left outer)
  │   │   │ table: t2@primary
  │   │   │ equality: (crdb_region, k) = (crdb_region,k)
  │   │   │ equality cols are key
  │   │   │ pred: st_contains(geom, geom)
  │   │   │
  │   │   └── • inverted join (left outer)
  │   │       │ table: t2@geom_idx
  │   │       │
  │   │       └── • render
  │   │           │
  │   │           └── • scan
  │   │                 estimated row count: 1,000
  │   │                 table: t1@primary
  │   │                 spans: FULL SCAN
  │   │
  │   └── • lookup join (left outer)
  │       │ table: t2@primary
  │       │ equality: (crdb_region, k) = (crdb_region,k)
  │       │ equality cols are key
  │       │ pred: st_contains(geom, geom)
  │       │
  │       └── • inverted join (left outer)
  │           │ table: t2@geom_idx
  │           │
  │           └── • render
  │               │
  │               └── • scan
  │                     estimated row count: 1,000
  │                     table: t1@primary
  │                     spans: FULL SCAN
  │
  └── • lookup join (left outer)
      │ table: t2@primary
      │ equality: (crdb_region, k) = (crdb_region,k)
      │ equality cols are key
      │ pred: st_contains(geom, geom)
      │
      └── • inverted join (left outer)
          │ table: t2@geom_idx
          │
          └── • render
              │
              └── • scan
                    estimated row count: 1,000
                    table: t1@primary
                    spans: FULL SCAN
(54 rows)

Tracking GitHub Issue

SET does not ROLLBACK in a transaction

SET does not properly apply ROLLBACK within a transaction. For example, in the following transaction, showing the TIME ZONE variable does not return 2 as expected after the rollback:

SET TIME ZONE +2;
BEGIN;
SET TIME ZONE +3;
ROLLBACK;
SHOW TIME ZONE;
timezone
------------
3

Tracking GitHub Issue

Unresolved limitations

Optimizer stale statistics deletion when columns are dropped

  • When a column is dropped from a multi-column index, the optimizer will not collect new statistics for the deleted column. However, the optimizer never deletes the old multi-column statistics. This can cause a buildup of statistics in system.table_statistics, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly delete those statistics from the system.table_statistics table, as sketched after this list.

    Tracking GitHub Issue

  • Single-column statistics are not deleted when columns are dropped, which could cause minor performance issues.

    Tracking GitHub Issue
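
For example, a minimal sketch of this workaround, using a hypothetical table my_table and hypothetical IDs; the system.table_statistics column names shown here are assumptions based on v21.1, so verify them in your cluster before deleting anything:

-- Find the table's descriptor ID.
SELECT table_id FROM crdb_internal.tables WHERE name = 'my_table';

-- Inspect the stored statistics for that table (53 is a hypothetical table ID).
SELECT "statisticID", name, "columnIDs", "createdAt"
FROM system.table_statistics WHERE "tableID" = 53;

-- Delete only the stale multi-column statistics identified above
-- (101 and 102 are hypothetical statistic IDs).
DELETE FROM system.table_statistics
WHERE "tableID" = 53 AND "statisticID" IN (101, 102);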

Automatic statistics refresher may not refresh after upgrade

The automatic statistics refresher automatically checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this will trigger collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an upgrade. This could cause a problem, for example, if the upgrade moves from a version without histograms to a version with histograms. To refresh statistics manually, use CREATE STATISTICS.
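
For example, a minimal sketch that manually refreshes statistics for a hypothetical table my_table:

CREATE STATISTICS my_table_stats FROM my_table;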

Tracking GitHub Issue

IMPORT into a REGIONAL BY ROW table

CockroachDB does not currently support IMPORTs into REGIONAL BY ROW tables that are part of multi-region databases.

Tracking GitHub Issue

To work around this limitation, you will need to take the following steps:

  1. In the source database, export the crdb_region column separately when exporting your data.

    EXPORT INTO CSV 'nodelocal://0/src_rbr' FROM SELECT crdb_region, i from src_rbr;
    

    For more information about the syntax, see EXPORT.

  2. In the destination database, create a table that has a crdb_region column of the right type as shown below.

    CREATE TABLE dest_rbr (crdb_region public.crdb_internal_region NOT NULL, i INT);
    
  3. Import the data (including the crdb_region column explicitly) using IMPORT INTO:

    IMPORT INTO dest_rbr (crdb_region, i) CSV DATA ('nodelocal://0/src_rbr/export*.csv')
    
  4. Convert the destination table to REGIONAL BY ROW using ALTER TABLE ... ALTER COLUMN and ALTER TABLE ... SET LOCALITY:

    ALTER TABLE dest_rbr ALTER COLUMN crdb_region SET DEFAULT default_to_database_primary_region(gateway_region())::public.crdb_internal_region;
    
    ALTER TABLE dest_rbr SET LOCALITY REGIONAL BY ROW AS crdb_region;
    

BACKUP of multi-region tables

CockroachDB does not currently support BACKUPs of individual tables that are part of multi-region databases. For example, you cannot backup a GLOBAL or REGIONAL table individually.

To work around this limitation, you must back up the database or the entire cluster.
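
For example, a minimal sketch that backs up an entire multi-region database rather than one of its tables; the database name and nodelocal URI are placeholders:

BACKUP DATABASE multi_region_test_db TO 'nodelocal://0/mr_backup';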

Tracking GitHub Issue

Differences in syntax and behavior between CockroachDB and PostgreSQL

CockroachDB supports the PostgreSQL wire protocol and the majority of its syntax. However, CockroachDB does not support some PostgreSQL features, or behaves differently from PostgreSQL, because not all features can be easily implemented in a distributed system.

For a list of known differences in syntax and behavior between CockroachDB and PostgreSQL, see Features that differ from PostgreSQL.

Multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE

CockroachDB does not currently support multiple arbiter indexes for INSERT ON CONFLICT DO UPDATE, and will return an error if there are multiple unique or exclusion constraints matching the ON CONFLICT DO UPDATE specification.

Tracking GitHub Issue

IMPORT into a table with partial indexes

CockroachDB does not currently support IMPORTs into tables with partial indexes.

To work around this limitation (see the sketch after these steps):

  1. Drop any partial indexes defined on the table.
  2. Perform the IMPORT.
  3. Recreate the partial indexes.
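
For example, a minimal sketch of these steps with hypothetical table, index, column, and file names:

-- 1. Drop the partial index.
DROP INDEX orders@orders_pending_idx;

-- 2. Perform the IMPORT.
IMPORT INTO orders (id, status) CSV DATA ('nodelocal://0/orders.csv');

-- 3. Recreate the partial index.
CREATE INDEX orders_pending_idx ON orders (id) WHERE status = 'pending';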

If you are performing an IMPORT of a PGDUMP with partial indexes:

  1. Drop the partial indexes on the PostgreSQL server.
  2. Recreate the PGDUMP.
  3. IMPORT the PGDUMP.
  4. Add partial indexes on the CockroachDB server.

Tracking GitHub Issue

Spatial support limitations

CockroachDB supports efficiently storing and querying spatial data, with the following limitations:

Subqueries in SET statements

It is not currently possible to use a subquery in a SET or SET CLUSTER SETTING statement. For example:

> SET application_name = (SELECT 'a' || 'b');
ERROR: invalid value for parameter "application_name": "(SELECT 'a' || 'b')"
SQLSTATE: 22023
DETAIL: subqueries are not allowed in SET

Tracking GitHub Issue

Enterprise BACKUP does not capture database/table/column comments

The COMMENT ON statement associates comments to databases, tables, or columns. However, the internal table (system.comments) in which these comments are stored is not captured by a BACKUP of a table or database.

As a workaround, take a cluster backup instead, as the system.comments table is included in cluster backups.

Tracking GitHub Issue

Change data capture

Change data capture (CDC) provides efficient, distributed, row-level change feeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations:

DB Console may become inaccessible for secure clusters

Accessing the DB Console for a secure cluster now requires login information (i.e., username and password). This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the DB Console.

AS OF SYSTEM TIME in SELECT statements

AS OF SYSTEM TIME can only be used in a top-level SELECT statement. That is, we do not support statements like INSERT INTO t SELECT * FROM t2 AS OF SYSTEM TIME <time> or two subselects in the same statement with differing AS OF SYSTEM TIME arguments.

Tracking GitHub Issue

Large index keys can impair performance

The use of tables with very large primary or secondary index keys (>32KB) can result in excessive memory usage. Specifically, if the primary or secondary index key is larger than 32KB, the default indexing scheme for storage engine SSTables breaks down and causes the index to be excessively large. The index is pinned in memory by default for performance.

To work around this issue, we recommend limiting the size of primary and secondary keys to 4KB, which you must account for manually. Note that most columns are 8B (exceptions being STRING and JSON), which still allows for very complex key structures.

Tracking GitHub Issue

Using LIKE...ESCAPE in WHERE and HAVING constraints

CockroachDB tries to optimize most comparison operators in WHERE and HAVING clauses into constraints on SQL indexes by only accessing selected rows. This is done for LIKE clauses when a common prefix for all selected rows can be determined in the search pattern (e.g., ... LIKE 'Joe%'). However, this optimization is not yet available if the ESCAPE keyword is also used.

Tracking GitHub Issue

TRUNCATE does not behave like DELETE

TRUNCATE is not a DML statement, but instead works as a DDL statement. Its limitations are the same as those of other DDL statements, which are outlined in Online Schema Changes: Limitations.

Tracking GitHub Issue

Ordering tables by JSONB/JSON-typed columns

CockroachDB does not currently key-encode JSON values. As a result, tables cannot be ordered by JSONB/JSON-typed columns.

Tracking GitHub Issue

Current sequence value not checked when updating min/max value

Altering the minimum or maximum value of a sequence does not check the sequence's current value. This means that it is possible to silently set the maximum to a value less than, or the minimum to a value greater than, the current value.
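
For example, a minimal sketch of this behavior with a hypothetical sequence:

CREATE SEQUENCE seq;
SELECT nextval('seq'), nextval('seq'), nextval('seq');  -- current value is now 3
ALTER SEQUENCE seq MAXVALUE 2;                          -- succeeds silently even though 3 > 2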

Tracking GitHub Issue

Using default_int_size session variable in batch of statements

When setting the default_int_size session variable in a batch of statements such as SET default_int_size='int4'; SELECT 1::INT, the default_int_size variable will not take effect until the next statement. This happens because statement parsing takes place asynchronously from statement execution.

As a workaround, set default_int_size via your database driver, or ensure that SET default_int_size is in its own statement.
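
For example, a minimal sketch of the second workaround, sending the SET on its own so that the following statement sees the new value:

SET default_int_size = 'int4';
SELECT 1::INT;  -- parsed after the SET has taken effect, so INT resolves to INT4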

Tracking GitHub Issue

COPY FROM statements are not supported in the CockroachDB SQL shell

The built-in SQL shell provided with CockroachDB (cockroach sql / cockroach demo) does not currently support importing data with the COPY statement.

To load data into CockroachDB, we recommend that you use an IMPORT. If you must use a COPY statement, you can issue the statement from the psql client command provided with PostgreSQL, or from another third-party client.

Tracking GitHub Issue

COPY syntax not supported by CockroachDB

CockroachDB does not yet support the following COPY syntax:

Import with a high amount of disk contention

IMPORT can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the kv.bulk_io_write.max_rate cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:

> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';

Placeholders in PARTITION BY

When defining a table partition, either during table creation or table alteration, it is not possible to use placeholders in the PARTITION BY clause.

Tracking GitHub Issue

Dropping a single partition

CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can repartition the table.
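
For example, a minimal sketch that removes a partition from a hypothetical table by repartitioning and listing only the partitions to keep (this assumes region is a prefix of the index being partitioned):

ALTER TABLE t PARTITION BY LIST (region) (
  PARTITION us_east VALUES IN ('us-east1'),
  PARTITION us_west VALUES IN ('us-west1')
);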

Adding a column with sequence-based DEFAULT values

It is currently not possible to add a column to a table when the column uses a sequence as the DEFAULT value, for example:

> CREATE TABLE t (x INT);
> INSERT INTO t(x) VALUES (1), (2), (3);
> CREATE SEQUENCE s;
> ALTER TABLE t ADD COLUMN y INT DEFAULT nextval('s');
ERROR: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context
SQLSTATE: 0A000

Tracking GitHub Issue

Available capacity metric in the DB Console

If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is not recommended in production), you must explicitly set the store size per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.

Schema changes within transactions

Within a single transaction:

  • DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions, as sketched below. For more details, see examples of unsupported statements.
  • As of version v2.1, you can run schema changes inside the same transaction as a CREATE TABLE statement. For more information, see this example.
  • A CREATE TABLE statement containing FOREIGN KEY or INTERLEAVE clauses cannot be followed by statements that reference the new table.
  • Database, schema, table, and user-defined type names cannot be reused. For example, you cannot drop a table named a and then create (or rename) a different table with the name a. Similarly, you cannot rename a database named a to b and then create (or rename) a different database with the name a. As a workaround, split RENAME TO, DROP, and CREATE statements that reuse object names into separate transactions.
  • Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed.
  • As of v19.1, some schema changes can be used in combination in a single ALTER TABLE statement. For a list of commands that can be combined, see ALTER TABLE. For a demonstration, see Add and rename columns atomically.
  • DROP COLUMN can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the DROP COLUMN statement to its own explicit transaction or run it in a single statement outside the existing transaction.
Note:

If a schema change within a transaction fails, manual intervention may be needed to determine which has failed. After determining which schema change(s) failed, you can then retry the schema changes.
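
For example, a minimal sketch of the first workaround, splitting hypothetical DML and DDL statements into separate transactions:

BEGIN;
INSERT INTO t (x) VALUES (4);
COMMIT;

BEGIN;
ALTER TABLE t ADD COLUMN y INT;
COMMIT;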

Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed

Schema change DDL statements that run inside a multi-statement transaction with non-DDL statements can fail at COMMIT time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.

If such a failure occurs, CockroachDB will emit a new CockroachDB-specific error code, XXA00, and the following error message:

transaction committed but schema change aborted with error: <description of error>
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.
Note:

This limitation exists in versions of CockroachDB prior to 19.2. In these older versions, CockroachDB returned the Postgres error code 40003, "statement completion unknown".

Warning:

If you must execute schema change DDL statements inside a multi-statement transaction, we strongly recommend checking for this error code and handling it appropriately every time you execute such transactions.

This error will occur in various scenarios, including but not limited to:

  • Creating a unique index fails because values aren't unique.
  • The evaluation of a computed value fails.
  • Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.

To see an example of this error, start by creating the following table.

CREATE TABLE T(x INT);
INSERT INTO T(x) VALUES (1), (2), (3);

Then, enter the following multi-statement transaction, which will trigger the error.

BEGIN;
ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
INSERT INTO T(x) VALUES (3);
COMMIT;
pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
Manual inspection may be required to determine the actual state of the database.

In this example, the INSERT statement committed, but the ALTER TABLE statement adding a UNIQUE constraint failed. We can verify this by looking at the data in table t and seeing that the additional non-unique value 3 was successfully inserted.

SELECT * FROM t;
  x
+---+
  1
  2
  3
  3
(4 rows)

Schema changes between executions of prepared statements

When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a SELECT * clause will result in an error:

CREATE TABLE users (id INT PRIMARY KEY);
PREPARE prep1 AS SELECT * FROM users;
ALTER TABLE users ADD COLUMN name STRING;
INSERT INTO users VALUES (1, 'Max Roach');
EXECUTE prep1;
ERROR: cached plan must not change result type
SQLSTATE: 0A000

It's therefore recommended to explicitly list result columns instead of using SELECT * in prepared statements, when possible.
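
For example, a minimal sketch in which the prepared statement lists its result columns explicitly, so a later ADD COLUMN does not change its result type (prep2 and the email column are hypothetical):

PREPARE prep2 AS SELECT id, name FROM users;
ALTER TABLE users ADD COLUMN email STRING;
EXECUTE prep2;  -- still succeeds because the result type did not change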

Size limits on statement input from SQL clients

CockroachDB imposes a hard limit of 16MiB on the data input for a single statement passed to CockroachDB from a client (including the SQL shell). We do not recommend attempting to execute statements from clients with large input.

Using \| to perform a large input in the SQL shell

In the built-in SQL shell, using the \| operator to perform a large number of inputs from a file can cause the server to close the connection. This is because \| sends the entire file as a single query to the server, which can exceed the upper bound on the size of a packet the server can accept from any client (16 MiB).

As a workaround, execute the file from the command line with cat data.sql | cockroach sql instead of from within the interactive shell.

New values generated by DEFAULT expressions during ALTER TABLE ADD COLUMN

When executing an ALTER TABLE ADD COLUMN statement with a DEFAULT expression, new values generated:

  • use the default search path regardless of the search path configured in the current session via SET SEARCH_PATH.
  • use the UTC time zone regardless of the time zone configured in the current session via SET TIME ZONE.
  • have no default database regardless of the default database configured in the current session via SET DATABASE, so you must specify the database of any tables they reference.
  • use the transaction timestamp for the statement_timestamp() function regardless of the time at which the ALTER statement was issued.

Load-based lease rebalancing in uneven latency deployments

When nodes are started with the --locality flag, CockroachDB attempts to place the replica lease holder (the replica that client requests are forwarded to) on the node closest to the source of the request. This means as client requests move geographically, so too does the replica lease holder.

However, you might see increased latency caused by a consistently high rate of lease transfers between datacenters in the following case:

  • Your cluster runs in datacenters which are very different distances away from each other.
  • Each node was started with a single tier of --locality, e.g., --locality=datacenter=a.
  • Most client requests get sent to a single datacenter because that's where all your application traffic is.

To detect if this is happening, open the DB Console, select the Queues dashboard, hover over the Replication Queue graph, and check the Leases Transferred / second data point. If the value is consistently larger than 0, you should consider stopping and restarting each node with additional tiers of locality to improve request latency.

For example, let's say that latency is 10ms from nodes in datacenter A to nodes in datacenter B but is 100ms from nodes in datacenter A to nodes in datacenter C. To ensure A's and B's relative proximity is factored into lease holder rebalancing, you could restart the nodes in datacenter A and B with a common region, --locality=region=foo,datacenter=a and --locality=region=foo,datacenter=b, while restarting nodes in datacenter C with a different region, --locality=region=bar,datacenter=c.

Overload resolution for collated strings

Many string operations are not properly overloaded for collated strings, for example:

> SELECT 'string1' || 'string2';
     ?column?
------------------
  string1string2
(1 row)
> SELECT ('string1' collate en) || ('string2' collate en);
pq: unsupported binary operator: <collatedstring{en}> || <collatedstring{en}>

Tracking GitHub Issue

Max size of a single column family

When creating or updating a row, if the combined size of all values in a single column family exceeds the max range size (512 MiB by default) for the table, the operation may fail, or cluster performance may suffer.

As a workaround, you can either manually split a table's columns into multiple column families, or you can create a table-specific zone configuration with an increased max range size.
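
For example, a minimal sketch of both workarounds with hypothetical names and sizes: the large BYTES value gets its own column family, and the table's zone configuration raises the max range size:

CREATE TABLE docs (
  id INT PRIMARY KEY,
  meta JSONB,
  payload BYTES,
  FAMILY f_small (id, meta),
  FAMILY f_large (payload)
);

ALTER TABLE docs CONFIGURE ZONE USING range_max_bytes = 1073741824;  -- 1 GiB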

Simultaneous client connections and running queries on a single node

When a node has both a high number of client connections and running queries, the node may crash due to memory exhaustion. This is due to CockroachDB not accurately limiting the number of clients and queries based on the amount of available RAM on the node.

To prevent memory exhaustion, monitor each node's memory usage and ensure there is some margin between maximum CockroachDB memory usage and available system RAM. For more details about memory usage in CockroachDB, see this blog post.

Privileges for DELETE and UPDATE

Every DELETE or UPDATE statement constructs a SELECT statement, even when no WHERE clause is involved. As a result, the user executing DELETE or UPDATE requires both the DELETE and SELECT or UPDATE and SELECT privileges on the table.
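
For example, a minimal sketch granting the privileges a hypothetical user needs in order to run DELETE statements against a table:

GRANT SELECT, DELETE ON TABLE orders TO app_user;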

Correlated common table expressions

CockroachDB does not support correlated common table expressions. This means that a CTE cannot refer to a variable defined outside the scope of that CTE.

For example, the following query returns an error:

> SELECT * FROM users
  WHERE id =
    (WITH rides_home AS
      (SELECT revenue FROM rides
       WHERE end_address = address)
     SELECT rider_id FROM rides_home);
ERROR: CTEs may not be correlated
SQLSTATE: 0A000

This query returns an error because the WITH rides_home clause references a column (address) returned by the SELECT statement at the top level of the query, outside the rides_home CTE definition.

Tracking GitHub Issue

ROLLBACK TO SAVEPOINT in high-priority transactions containing DDL

Transactions with priority HIGH that contain DDL and ROLLBACK TO SAVEPOINT are not supported, as they could result in a deadlock. For example:

> BEGIN PRIORITY HIGH; SAVEPOINT s; CREATE TABLE t(x INT); ROLLBACK TO SAVEPOINT s;
ERROR: unimplemented: cannot use ROLLBACK TO SAVEPOINT in a HIGH PRIORITY transaction containing DDL
SQLSTATE: 0A000
HINT: You have attempted to use a feature that is not yet implemented.
See: https://github.com/cockroachdb/cockroach/issues/46414

Tracking GitHub Issue

Concurrent SQL shells overwrite each other's history

The built-in SQL shell stores its command history in a single file by default (.cockroachsql_history). When you run multiple instances of the SQL shell on the same machine, each shell's command history can therefore be overwritten in unexpected ways.

As a workaround, set the COCKROACH_SQL_CLI_HISTORY environment variable to different values for the two different shells, for example:

$ export COCKROACH_SQL_CLI_HISTORY=.cockroachsql_history_shell_1
$ export COCKROACH_SQL_CLI_HISTORY=.cockroachsql_history_shell_2

Tracking GitHub Issue

Passwords with special characters cannot be passed in connection parameter

CockroachDB does not allow passwords with special characters to be passed as a connection parameter to cockroach commands.

Tracking GitHub Issue

CockroachDB does not test for all connection failure scenarios

CockroachDB servers rely on the network to report when a TCP connection fails. In most scenarios when a connection fails, the network immediately reports a connection failure, resulting in a Connection refused error.

However, if there is no host at the target IP address, or if a firewall rule blocks traffic to the target address and port, a TCP handshake can linger while the client network stack waits for a TCP packet in response to network requests. To work around this kind of scenario, we recommend the following:

  • When migrating a node to a new machine, keep the server listening at the previous IP address until the cluster has completed the migration.
  • Configure any active network firewalls to allow node-to-node traffic.
  • Verify that orchestration tools (e.g., Kubernetes) are configured to use the correct network connection information.

Tracking GitHub Issue

Some column-dropping schema changes do not roll back properly

Some schema changes that drop columns cannot be rolled back properly.

In some cases, the rollback will succeed, but the column data might be partially or totally missing, or stale due to the asynchronous nature of the schema change.

Tracking GitHub Issue

In other cases, the rollback will fail in such a way that will never be cleaned up properly, leaving the table descriptor in a state where no other schema changes can be run successfully.

Tracking GitHub Issue

To reduce the chance that a column drop will roll back incorrectly, take the following precautions (sketched after this list):

  • Perform column drops in transactions separate from other schema changes. This ensures that other schema change failures will not cause the column drop to be rolled back.

  • Drop all constraints (including unique indexes) on the column in a separate transaction, before dropping the column.

  • Drop any default values or computed expressions on a column before attempting to drop the column. This prevents conflicts between constraints and default/computed values during a column drop rollback.
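
For example, a minimal sketch of these precautions with hypothetical names, running each statement as its own implicit transaction:

ALTER TABLE t DROP CONSTRAINT t_col_key;      -- drop the unique constraint first
ALTER TABLE t ALTER COLUMN col DROP DEFAULT;  -- then remove the default value
ALTER TABLE t DROP COLUMN col;                -- finally, drop the column by itself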

If you think a rollback of a column-dropping schema change has occurred, check the jobs table. Schema changes with an error prefaced by cannot be reverted, manual cleanup may be required might require manual intervention.

Disk-spilling on joins with JSON columns

If the execution of a join query exceeds the limit set for memory-buffering operations (i.e., the value set for the sql.distsql.temp_storage.workmem cluster setting), CockroachDB will spill the intermediate results of computation to disk. If the join operation spills to disk, and at least one of the equality columns is of type JSON, CockroachDB returns the error unable to encode table key: *tree.DJSON. If the memory limit is not reached, then the query will be processed without error.

Tracking GitHub Issue

Disk-spilling not supported for some unordered distinct operations

Disk spilling is not supported when running UPSERT statements whose plans contain an unordered distinct operation with the nulls are distinct and error on duplicate markers. You can check for these markers by using EXPLAIN and looking at the statement plan:

        ├── distinct                     |                     |
        │    │                           | distinct on         | ...
        │    │                           | nulls are distinct  |
        │    │                           | error on duplicate  |

Using interleaved tables in backups

Interleaved tables are disabled by default in v21.1. Your backup will fail if your cluster includes interleaved data. To include interleaved tables, use the INCLUDE_DEPRECATED_INTERLEAVES option. Note that interleaved tables will be permanently removed from CockroachDB in a future release, so you will be unable to RESTORE backups containing interleaved tables to any future versions.
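
For example, a minimal sketch of a database backup that opts in to interleaved data; the database name and nodelocal URI are placeholders:

BACKUP DATABASE my_db TO 'nodelocal://0/backup_with_interleaves' WITH include_deprecated_interleaves;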

Tracking GitHub Issue

GIN index scans can't be generated for some statement filters

CockroachDB cannot generate GIN index scans for statements with filters that have both JSON fetch values and containment operators. For example, the following statement won't be index-accelerated:

SELECT * FROM mytable WHERE j->'a' @> '{"b": "c"}';

CockroachDB v20.1 and earlier would generate index scans for these filters, though this is not recommended, as the normalization rules used to convert the filters into JSON containment expressions would sometimes produce inequivalent expressions.

The workaround is to rewrite the statement filters to avoid using both JSON fetch values and containment operators. The following statement is index-accelerated and equivalent to the non-accelerated statement above:

SELECT * FROM mytable WHERE j @> '{"a": {"b": "c"}}';

Tracking GitHub Issue

The optimizer won't plan locality optimized searches using unique indexes on virtual computed columns

CockroachDB will not plan locality optimized searches using unique indexes on virtual computed columns.

A workaround is to use stored computed columns.
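
For example, a generic sketch of the workaround with a hypothetical table: declaring the computed column STORED rather than VIRTUAL means the unique index is defined on a stored column:

CREATE TABLE accounts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email STRING,
  email_lower STRING AS (lower(email)) STORED,
  UNIQUE INDEX email_lower_idx (email_lower)
);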

Tracking GitHub Issue

Unique indexes on virtual computed columns can't be used with multi-region clusters

CockroachDB performs a full-table scan on all inserts when using partitioned unique indexes on virtual computed columns.

A workaround is to use stored computed columns.

Tracking GitHub Issues

Remove a UNIQUE index created as part of CREATE TABLE

UNIQUE indexes created as part of a CREATE TABLE statement cannot be removed without using CASCADE. Unique indexes created with CREATE INDEX do not have this limitation.
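
For example, a minimal sketch with hypothetical names; the index defined inside CREATE TABLE can only be dropped with CASCADE, while one created with CREATE INDEX can be dropped normally:

CREATE TABLE t3 (a INT, b INT, UNIQUE INDEX a_idx (a));
CREATE UNIQUE INDEX b_idx ON t3 (b);

DROP INDEX t3@b_idx;          -- created with CREATE INDEX: no CASCADE needed
DROP INDEX t3@a_idx CASCADE;  -- created as part of CREATE TABLE: requires CASCADE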

