
DDL operations for schemas, tables, indexes, views, materialized views, etc.

Last updated: 2024-08-22 16:59:15

Database management

Create a database

To create a database, you must be a superuser or have the special CREATEDB privilege. By default, a new database is created by cloning the standard system database template1. You can specify a different template by writing TEMPLATE name. Specifically, by writing TEMPLATE template0, you can create a clean database that contains only the standard objects predefined by your TBase version.

Creating database by using default parameters

postgres=# create database tbase_db;
CREATE DATABASE

Specifying database to be cloned

postgres=# create database tbase_db_template TEMPLATE template0;
CREATE DATABASE

Specifying owner

postgres=# create role pgxz with login;
CREATE ROLE
postgres=# create database tbase_db_owner owner pgxz;
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description |
|----------------|------|----------|------------|------------|-------------------|-------|------------|-------------|
| tbase_db_owner | pgxz | UTF8 | en_US.utf8 | en_US.utf8 | | 18 MB | pg_default | |

Specifying encoding

postgres=# create database tbase_db_encoding ENCODING UTF8;
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description |
|-------------------|-------|----------|------------|------------|-------------------|-------|------------|-------------|
| tbase_db_encoding | tbase | UTF8 | en_US.utf8 | en_US.utf8 | | 18 MB | pg_default | |

Create a GBK encoded database

postgres=# CREATE DATABASE db_gbk template template0 encoding = gbk LC_COLLATE = 'zh_CN.gbk' LC_CTYPE = 'zh_CN.gbk';
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description |
|--------|-------|----------|-----------|-----------|-------------------|-------|------------|-------------|
| db_gbk | tbase | GBK | zh_CN.gbk | zh_CN.gbk | | 19 MB | pg_default | |

Create a GB18030 encoded database

postgres=# create database db_gb18030 template=template0 encoding=gb18030 LC_COLLATE = 'zh_CN.gb18030' LC_CTYPE = 'zh_CN.gb18030';
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges |
|------------|-------|----------|---------------|---------------|-------------------|
| db_gb18030 | tbase | GB18030 | zh_CN.gb18030 | zh_CN.gb18030 | |
...

Specifying the collation (LC_COLLATE)

postgres=# create database tbase_db_lc_collate lc_collate 'C';
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description |
|---------------------|-------|----------|---------|------------|-------------------|-------|------------|-------------|
| tbase_db_lc_collate | tbase | UTF8 | C | en_US.utf8 | | 18 MB | pg_default | |

Specifying the character classification (LC_CTYPE)

postgres=# create database tbase_db_lc_ctype LC_CTYPE 'C' ;
CREATE DATABASE
List databases and their details:
List of databases

| Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description |
|-------------------|-------|----------|------------|-------|-------------------|-------|------------|-------------|
| tbase_db_lc_ctype | tbase | UTF8 | en_US.utf8 | C | | 18 MB | pg_default | |

Configuring whether connections are allowed

postgres=# create database tbase_db_allow_connections ALLOW_CONNECTIONS true;
CREATE DATABASE
View database connection configuration:
postgres=# select datallowconn from pg_database where datname='tbase_db_allow_connections';
datallowconn
--------------
t

Configuring the number of connections

postgres=# create database tbase_db_connlimit CONNECTION LIMIT 100;
CREATE DATABASE
View database connection number configuration:
postgres=# select datconnlimit from pg_database where datname='tbase_db_connlimit';
datconnlimit
--------------
100

Configuring whether the database can be used as a template

postgres=# create database tbase_db_istemplate is_template true;
CREATE DATABASE
View database template configuration:
postgres=# select datistemplate from pg_database where datname='tbase_db_istemplate';
datistemplate
---------------
t

Configuring multiple parameters together

postgres=# create database tbase_db_mul owner pgxz CONNECTION LIMIT 50 template template0 encoding 'utf8' lc_collate 'C';
CREATE DATABASE

Modify database configuration

Renaming database

postgres=# alter database tbase_db rename to tbase_db_new;
ALTER DATABASE

Modifying the number of connections

postgres=# alter database tbase_db_new connection limit 50;
ALTER DATABASE

Changing database owner

postgres=# alter database tbase_db_new owner to tbase;
ALTER DATABASE

Configuring default database running parameters

postgres=# alter database tbase_db_new set search_path to public,pg_catalog,pg_oracle;
ALTER DATABASE
More usage of SET can be found in the operation and maintenance documentation.

Unsupported features of ALTER DATABASE

| Item | Remarks |
|------------|--------------------------|
| encoding | Encoding |
| lc_collate | Collation |
| lc_ctype | Character classification |

Dropping a Database


postgres=# drop database tbase_db_new;
DROP DATABASE
If you drop a database when a session is connected to it, the following error will be reported:
postgres=# drop database tbase_db_template;
ERROR: database "tbase_db_template" is being accessed by other users
DETAIL: There is 1 other session using the database.
You can use the following method to disconnect the session and then drop the database:
postgres=# select pg_terminate_backend(pid) from pg_stat_activity where datname='tbase_db_template';
pg_terminate_backend
----------------------
t
(1 row)
Retry deleting the database:
postgres=# drop database tbase_db_template;
DROP DATABASE



Schema Management

A schema is essentially a namespace, and it goes by different names in different database systems: in Oracle it usually corresponds to a user, in SQL Server it is likewise called a schema, and in MySQL it corresponds to a database. A schema may contain tables, data types, functions, and operators, and object names can be duplicated across different schemas. To access an object in a specific schema, use the format "SchemaName.ObjectName".

Creating Schema

Standard statement:
postgres=# create schema tbase;
CREATE SCHEMA
Use the extended syntax to create a schema only if it does not exist:
postgres=# create schema if not exists tbase;
NOTICE: schema "tbase" already exists, skipping
CREATE SCHEMA
Specify the owner of the schema:
postgres=# create schema tbase_pgxz AUTHORIZATION pgxz;
CREATE SCHEMA
Use the \dn command to list schemas and their owners:
List of schemas
Name | Owner
------------+-------
tbase_pgxz | pgxz
(1 row)

Modifying schema attributes

Modify the schema name:
postgres=# alter schema tbase rename to tbase_new;
ALTER SCHEMA
Change the owner:
postgres=# alter schema tbase_pgxz owner to tbase;
ALTER SCHEMA

Dropping a Schema

Schema deletion command:
postgres=# drop schema tbase_new;
DROP SCHEMA
If there are objects in the schema, dropping will fail, and error information will be prompted. You can use the CASCADE option to force delete the schema and its dependent objects:
postgres=# drop schema tbase_pgxz CASCADE;
NOTICE: drop cascades to table tbase_pgxz.t
DROP SCHEMA

Configuring User Access to Schema

For a regular user to access an object in a schema, two grants are needed: privileges on the object itself and USAGE on the schema.
Authorize users to access a specific table:
postgres=# grant select on tbase.t to pgxz;
GRANT
Authorize users to use a specific schema:
postgres=# grant USAGE on SCHEMA tbase to pgxz;
GRANT

Configuring Schema Access Order

TBase uses the search_path runtime parameter to configure the lookup order for data objects.
Search path of the currently connected user:
postgres=# show search_path ;
search_path
-----------------
"$user", public
(1 row)
When creating a table without specifying a schema, the table is stored in the first valid schema in the search path:
postgres=# create table t(id int,mc text);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
When accessing an object not in the search path, you need to enter its full path:
postgres=# select * from t1;
ERROR: relation "t1" does not exist
postgres=# select * from tbase_schema.t1;
id | mc
----+----
(0 rows)

Creating and Deleting Data Tables

Creating tables without specifying a shard key

When creating tables without specifying a shard key, the system defaults to using the first field as the shard key.
postgres=# create table t_first_col_share(id serial not null, nickname text);
CREATE TABLE
postgres=# \d+ t_first_col_share
                                       Table "public.t_first_col_share"
  Column  |  Type   |                           Modifiers                            | Storage  | Stats target | Description
----------+---------+----------------------------------------------------------------+----------+--------------+-------------
 id       | integer | not null default nextval('t_first_col_share_id_seq'::regclass) | plain    |              |
 nickname | text    |                                                                | extended |              |
Has OIDs: no
Distribute By: SHARD(id)
Location Nodes: ALL DATANODES

Principles for Shard Key Selection:
Only one field can be selected as the shard key.
If there is a primary key, the primary key should be selected as the shard key.
If the primary key is a composite field, select the field with the most distinct values as the shard key.
Composite fields can also be concatenated into a new field to serve as the shard key.
If there is no primary key, UUID can be used as the shard key.
In short, ensure the data is distributed as evenly as possible.
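As a sketch of the composite-key principles above, a concatenated column can serve as the shard key when the natural key spans several fields (the table and column names here are hypothetical):

```sql
-- Hypothetical order table whose natural key is (region_id, order_no).
-- The application fills shard_key with region_id || '_' || order_no,
-- and the table is sharded on that concatenated value.
create table t_order (
    region_id int    not null,
    order_no  bigint not null,
    shard_key text   not null,
    amount    numeric
) distribute by shard(shard_key);
```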

Creating tables by specifying a shard key

To create a table with a specified shard key, use the distribute by shard(column_name) clause:
postgres=# create table t_appoint_col(id serial not null, nickname text) distribute by shard(nickname);
CREATE TABLE
postgres=# \d+ t_appoint_col
                                     Table "public.t_appoint_col"
  Column  |  Type   |                         Modifiers                          | Storage  | Stats target | Description
----------+---------+------------------------------------------------------------+----------+--------------+-------------
 id       | integer | not null default nextval('t_appoint_col_id_seq'::regclass) | plain    |              |
 nickname | text    |                                                            | extended |              |
Has OIDs: no
Distribute By: SHARD(nickname)
Location Nodes: ALL DATANODES

Creating tables by specifying a group

To place a table in a specific node group, use the to group group_name clause.
postgres=# create table t (id integer, nc text) distribute by shard(id) to group default_group;
CREATE TABLE
postgres=# \d+ t
                        Table "public.t"
 Column |  Type   | Modifiers | Storage  | Stats target | Description
--------+---------+-----------+----------+--------------+-------------
 id     | integer |           | plain    |              |
 nc     | text    |           | extended |              |
Has OIDs: no
Distribute By: SHARD(id)
Location Nodes: ALL DATANODES

Logical partitioned table

Range partitioned table

Creating primary partitions and subpartitions.
postgres=# create table t_native_range (f1 bigint, f2 timestamp default now(), f3 integer) partition by range(f2) distribute by shard(f1) to group default_group;
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
Creating two subtables:
postgres=# create table t_native_range_201709 partition of t_native_range (f1, f2, f3) for values from ('2017-09-01') to ('2017-10-01');
postgres=# create table t_native_range_201710 partition of t_native_range (f1, f2, f3) for values from ('2017-10-01') to ('2017-11-01');
Note that adding subpartitions affects DML operations on the parent table's data.

Default partitioned table

If an inserted row does not match any partition and no default partition exists, the insert reports an error.
postgres=# insert into t_native_range values(2,'2017-08-01',2);
ERROR: no partition of relation "t_native_range" found for row
Adding a default partitioned table:
postgres=# CREATE TABLE t_native_range_default PARTITION OF t_native_range DEFAULT;
CREATE TABLE
postgres=# insert into t_native_range values(2,'2017-08-01',2);
INSERT 0 1

MAXVALUE partition

Creating a MAXVALUE partition.
postgres=# CREATE TABLE t_native_range_maxvalue PARTITION OF t_native_range for values from ('2017-11-01') to (maxvalue);
CREATE TABLE
postgres=# insert into t_native_range values(1,'2017-11-01',1);
INSERT 0 1
All rows with f2 greater than or equal to 2017-11-01 are stored in the subtable t_native_range_maxvalue.

MINVALUE partition

Creating a MINVALUE partition.
postgres=# CREATE TABLE t_native_range_minvalue PARTITION OF t_native_range for values from (minvalue) to ('2017-09-01');
CREATE TABLE
postgres=# insert into t_native_range values(1,'2017-08-01',1);
INSERT 0 1

View table structure

View the table structure of t_native_range.
postgres=# \d+ t_native_range
                                Table "tbase_pg_proc.t_native_range"
 Column |            Type             | Collation | Nullable | Default | Storage | Stats target | Description
--------+-----------------------------+-----------+----------+---------+---------+--------------+-------------
 f1     | bigint                      |           |          |         | plain   |              |
 f2     | timestamp without time zone |           |          | now()   | plain   |              |
 f3     | integer                     |           |          |         | plain   |              |
Partition key: RANGE (f2)
Partitions: ...
Distribute By: SHARD(f1)
Location Nodes: ALL DATANODES

List partitioned table

Creating primary partitions and subpartitions.
postgres=# create table t_native_list(f1 bigserial not null, f2 text, f3 integer, f4 date) partition by list(f2) distribute by shard(f1) to group default_group;
Create two subtables, storing "Guangdong" and "Beijing" respectively.
postgres=# create table t_list_gd partition of t_native_list(f1, f2, f3, f4) for values in ('Guangdong');
postgres=# create table t_list_bj partition of t_native_list(f1, f2, f3, f4) for values in ('Beijing');
View the table structure:
postgres=# \d+ t_native_list
                                             Table "tbase.t_native_list"
 Column |  Type   | Collation | Nullable |                  Default                  | Storage  | Stats target | Description
--------+---------+-----------+----------+-------------------------------------------+----------+--------------+-------------
 f1     | bigint  |           | not null | nextval('t_native_list_f1_seq'::regclass) | plain    |              |
 f2     | text    |           |          |                                           | extended |              |
 f3     | integer |           |          |                                           | plain    |              |
 f4     | date    |           |          |                                           | plain    |              |
Partition key: LIST (f2)
Partitions: ...
Distribute By: SHARD(f1)
Location Nodes: ALL DATANODES

Creating a default partition

An error will occur if there is no default partition, and data insertion will fail.
postgres=# insert into t_native_list values(1,'Shanghai',1,current_date);
ERROR: no partition of relation "t_native_list" found for row
After creation, insertion can proceed normally.
postgres=# CREATE TABLE t_native_list_default PARTITION OF t_native_list DEFAULT;
CREATE TABLE
postgres=# insert into t_native_list values(1,'Shanghai',1,current_date);
INSERT 0 1

Hash partitioned table

When creating a hash-partitioned table, the number of partitions must be fixed in advance: each subpartition declares a MODULUS and REMAINDER, and the hash of the partition key determines which subpartition each row goes to. Currently, hash partitions cannot be added or dropped after creation. The example below creates a hash table with 4 subpartitions.
postgres=# create table t_hash_partition(f1 int, f2 int) partition by hash(f2);
create table t_hash_partition_1 partition of t_hash_partition FOR VALUES WITH(MODULUS 4, REMAINDER 0);
create table t_hash_partition_2 partition of t_hash_partition FOR VALUES WITH(MODULUS 4, REMAINDER 1);
create table t_hash_partition_3 partition of t_hash_partition FOR VALUES WITH(MODULUS 4, REMAINDER 2);
create table t_hash_partition_4 partition of t_hash_partition FOR VALUES WITH(MODULUS 4, REMAINDER 3);
Insert data:
postgres=# insert into t_hash_partition values(1,1),(2,2),(3,3);
COPY 3
Query result:
postgres=# select * from t_hash_partition;
+----+----+
| f1 | f2 |
+----+----+
| 1 | 1 |
| 3 | 3 |
| 2 | 2 |
+----+----+
(3 rows)
TBase automatically performs pruning based on partition values:
postgres=# explain select * from t_hash_partition where f2=2;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Remote Fast Query Execution  (cost=0.00..0.00 rows=0 width=0)
   Node/s: dn001, dn002
   ->  Append  (cost=0.00..26.88 rows=7 width=8)
         ->  Seq Scan on t_hash_partition_3  (cost=0.00..26.88 rows=7 width=8)
               Filter: (f2 = 2)
(5 rows)

Multi-level partitioned table

Create Main Table

postgres=# create table t_native_mul_list(f1 bigserial not null, f2 integer, f3 text, f4 text, f5 date)
partition by list (f3) distribute by shard(f1) to group default_group;
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

Create Secondary Table

postgres=# create table t_native_mul_list_gd partition of t_native_mul_list for values in ('Guangdong')
partition by range(f5);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

postgres=# create table t_native_mul_list_bj partition of t_native_mul_list for values in ('Beijing')
partition by range(f5);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

postgres=# create table t_native_mul_list_sh partition of t_native_mul_list for values in ('Shanghai');
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

Create Tertiary Table

postgres=# create table t_native_mul_list_gd_201701 partition of t_native_mul_list_gd(f1,f2,f3,f4,f5)
for values from ('2017-01-01') to ('2017-02-01');
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

postgres=# create table t_native_mul_list_gd_201702 partition of t_native_mul_list_gd(f1,f2,f3,f4,f5)
for values from ('2017-02-01') to ('2017-03-01');
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

postgres=# create table t_native_mul_list_bj_201701 partition of t_native_mul_list_bj(f1,f2,f3,f4,f5)
for values from ('2017-01-01') to ('2017-02-01');
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE

postgres=# create table t_native_mul_list_bj_201702 partition of t_native_mul_list_bj(f1,f2,f3,f4,f5)
for values from ('2017-02-01') to ('2017-03-01');
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
TBase supports mixing partition levels: subpartitions under the same parent do not all need to be partitioned to the same depth.
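As an illustration of multi-level routing (a sketch based on the partitions created above), a Guangdong row dated January 2017 passes through the list level and lands in the tertiary range partition:

```sql
-- Routed by list(f3) to t_native_mul_list_gd, then by range(f5)
-- to t_native_mul_list_gd_201701.
insert into t_native_mul_list(f2, f3, f4, f5)
values (1, 'Guangdong', 'demo', '2017-01-01');
```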

Database Replication Table

A replication table stores a full copy of the same data on every data node (DN). After creating a replication table, inserting data and then querying on different nodes returns identical results. Replication tables do not support triggers or foreign keys.

Create Replication Table

CREATE TABLE t_rep (id INT, mc TEXT) DISTRIBUTE BY REPLICATION TO GROUP default_group;

Inserting Data

INSERT INTO t_rep VALUES(1, 'TBase'), (2, 'pgxz');

Querying Data

EXECUTE DIRECT ON (dn001) 'SELECT * FROM t_rep';
EXECUTE DIRECT ON (dn002) 'SELECT * FROM t_rep';

Columnar Storage Table Management

Columnar storage tables are only supported in TBase-v3.

Create Columnar Storage Table

CREATE TABLE t_col_test (f1 INT, f2 VARCHAR(32), f3 DATE) WITH (orientation='column');

View Columnar Storage Table Structure

\d+ t_col_test

Specify Compression Type

Use a uniform compression level for all columns:
CREATE TABLE t_col1 (f1 INT, f2 INT, f3 DATE) WITH (orientation=column, compression=HIGH);
Configure Default Compression Level:
SHOW default_rel_compression;
SET default_rel_compression TO 'middle';
Set different compression levels for different columns:
CREATE TABLE t_col3 (
f1 INT ENCODING(compression=no),
f2 INT ENCODING(compression=low),
f3 INT ENCODING(compression=middle),
f4 INT ENCODING(compression=high)
) WITH (orientation=column);

Columnar partitioned table

Create Main Table

CREATE TABLE t_native_range (
f1 BIGINT,
f2 TIMESTAMP DEFAULT NOW(),
f3 INTEGER
) PARTITION BY RANGE (f2) WITH (orientation=column, compression=low);

Create Subtable

CREATE TABLE t_native_range_201709 PARTITION OF t_native_range FOR VALUES FROM ('2017-09-01') TO ('2017-10-01') WITH (orientation=column, compression=low);
CREATE TABLE t_native_range_201710 PARTITION OF t_native_range FOR VALUES FROM ('2017-10-01') TO ('2017-11-01') WITH (orientation=column, compression=low);

View table structure

\d+ t_native_range

Columnar Stash Table

The stash feature for columnar tables lets single-row INSERT or UPDATE data be staged in a row-oriented stash area first, and later flushed into the columnar storage structure according to the configured policy.

Create Stash Table

CREATE TABLE t_stash (f1 INT, f2 INT) WITH (orientation=column, stash_enabled=on);

Modify the Stash Attribute of a Table

ALTER TABLE t_stash SET (stash_enabled=off);

Foreign Table Management

Create hdfs_fdw Plugin

CREATE EXTENSION hdfs_fdw;

Create Server

Create COS Server

CREATE SERVER $cos_server_name
FOREIGN DATA WRAPPER hdfs_fdw
OPTIONS (
address 'cos://$bucketname',
appid '$appid',
access_keyid '$ak',
secret_accesskey '$sk',
region '$region',
client_type 'cos'
);

Create HDFS Server

CREATE SERVER $hdfs_server_name
FOREIGN DATA WRAPPER hdfs_fdw
OPTIONS (
address 'ofs://xxxxxx.chdfs.ap-guangzhou.myqcloud.com',
appid '$appid',
client_type 'hdfs'
);

Create Fusion Bucket Server

CREATE SERVER $cosn_server_name
FOREIGN DATA WRAPPER hdfs_fdw
OPTIONS (
address 'cosn://$bucketname',
appid '$appid',
access_keyid '$ak',
secret_accesskey '$sk',
region '$region',
client_type 'cosn'
);

Create Postgres Server

CREATE SERVER postgres_fdw_postgres_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (
host '172.16.64.14',
port '11345',
dbname 'postgres'
);

Configure User Mapping

CREATE USER MAPPING FOR $TDSQL-A_USER SERVER $cos_server_name;
CREATE USER MAPPING FOR $TDSQL-A_USER SERVER $hdfs_server_name;
CREATE USER MAPPING FOR $TDSQL-A_USER SERVER $cosn_server_name;

Create External Table

Create Non-partitioned Table

CREATE FOREIGN TABLE test_csv(
id INT,
name TEXT
)
SERVER $ServerName
OPTIONS (
FORMAT 'csv',
FOLDERNAME '$data directory/',
distribute 'shard'
);

Create Table in TEXT Format

CREATE FOREIGN TABLE test_text(
id INT,
name TEXT
)
SERVER $ServerName
OPTIONS (
FORMAT 'text',
DELIMITER '$Column Separator',
FOLDERNAME '$data directory/',
distribute 'shard'
);

Create Table in ORC Format

CREATE FOREIGN TABLE orc_table(
a BIGINT,
b TEXT,
c FLOAT
)
SERVER $ServerName
OPTIONS (
FORMAT 'orc',
FOLDERNAME '$data directory/',
distribute 'shard'
);

Creating a Partition Table

CREATE FOREIGN TABLE login_logs_parquet(
l_id TEXT,
l_loginName TEXT,
l_date TEXT,
year TEXT,
month TEXT
)
SERVER cosn_server
OPTIONS (
FORMAT 'parquet',
FOLDERNAME '$data directory/',
distribute 'shard',
PARTITION 'year,month'
);

Query the external table and import data into the internal table

INSERT INTO $internal_table SELECT * FROM $external_table;

dblink External Table

Create dblink

CREATE DATABASE LINK abc
CONNECT TO "TBASE"
IDENTIFIED BY 'tbase2018'
USING '(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.0.17)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ora12c)
)
)';

Query dblink System Table

SELECT * FROM pg_dblink;

Access Remote Table

SELECT * FROM t1@abc;

Drop dblink

DROP DATABASE LINK abc;

tbase_dblink

Create postgres_fdw

CREATE EXTENSION postgres_fdw;

Create dblink

SET dblink_types = 'postgresql';
CREATE DATABASE LINK tbase_dblink
CONNECT TO "tbase"
IDENTIFIED BY 'tbase@2017'
USING '(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.0.23)(PORT = 11379))
(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = postgres)(SCHEMA_NAME = public))
(COLLATE = YES)(DEFAULT = YES)(NOT_NULL = YES)
)';

Query dblink System Table

SELECT * FROM pg_dblink WHERE dblinkname='tbase_dblink';

Access Remote Table

SELECT * FROM t2@tbase_dblink;

Drop dblink

DROP DATABASE LINK tbase_dblink;

Create table using IF NOT EXISTS

CREATE TABLE IF NOT EXISTS t(id INT, mc TEXT);

Create table with designated mode

CREATE TABLE public.t(id INT, mc TEXT);

Create data table using query results

CREATE TABLE t(id INT, mc TEXT) DISTRIBUTE BY SHARD(mc);
INSERT INTO t VALUES(1, 'tbase');
CREATE TABLE t_as AS SELECT * FROM t;

Dropping a Table

DROP TABLE t;
DROP TABLE public.t;
DROP TABLE IF EXISTS t;
DROP TABLE tbase_schema.t1 CASCADE;

Drop partitioned subtable

DROP TABLE t_time_range_part_1;

Add partitioned subtable

Add partitioned subtable at the end

ALTER TABLE t1_pt ADD PARTITIONS 2;

Add partitioned subtable in the middle

CREATE TABLE t_time_range_part_1 (
f1 BIGINT NOT NULL,
f2 TIMESTAMP WITHOUT TIME ZONE,
f3 BIGINT
);
ALTER TABLE t_time_range_part_1 ADD CONSTRAINT t_time_range_pkey_part_1 PRIMARY KEY (f1);
CREATE INDEX t_time_range_f2_idx_part_1 ON t_time_range_part_1(f2);
INSERT INTO t_time_range VALUES(1, '2017-10-1', 1);
SELECT * FROM t_time_range;

Create and drop index

Regular Index

postgres=# create index t_appoint_id_idx on t_appoint_col using btree(id);
CREATE INDEX

Unique index

Creating unique index

postgres=# create unique index t_first_col_share_id_uidx on t_first_col_share using btree(id);
CREATE INDEX
A unique index must include the shard key; creating one on a non-shard-key field fails:
postgres=# create unique index t_first_col_share_nickname_uidx on t_first_col_share using btree(nickname);
Error message:
ERROR: Unique index of partitioned table must contain the hash/modulo distribution column.
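One workaround, assuming uniqueness of the combined key is acceptable, is to build a composite unique index that includes the shard key:

```sql
-- Allowed because the index contains the shard key column id.
create unique index t_first_col_share_id_nick_uidx
    on t_first_col_share using btree(id, nickname);
```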

Index on expression

Create table t_upper:
postgres=# create table t_upper(id int,mc text);
Create an expression index on upper(mc):
postgres=# create index t_upper_mc on t_upper(upper(mc));
Insert data and analyze:
postgres=# insert into t_upper select t,md5(t::text) from generate_series(1,10000) as t;
postgres=# analyze t_upper;
The query plan now uses the expression index:
postgres=# explain select * from t_upper where upper(mc)=md5('1');

Conditional index

Create table t_sex:
postgres=# create table t_sex(id int,sex text);
Create a conditional (partial) index that only covers the rare value:
postgres=# create index t_sex_sex_idx on t_sex (sex) where sex='女';
Insert data and analyze:
postgres=# insert into t_sex select t,'男' from generate_series(1,1000000) as t;
postgres=# insert into t_sex select t,'女' from generate_series(1,100) as t;
postgres=# analyze t_sex;
The query plan uses the conditional index:
postgres=# explain select * from t_sex where sex ='女';

gist index

The gist_trgm_ops operator class is provided by the pg_trgm extension, so create it first:
postgres=# create extension pg_trgm;
Create table t_trgm and create a gist index:
postgres=# create table t_trgm (id int,trgm text,no_trgm text);
postgres=# create index t_trgm_trgm_idx on t_trgm using gist(trgm gist_trgm_ops);
Note: Only row-store table is supported

gin index

pg_trgm index

Delete index t_trgm_trgm_idx
postgres=# drop index t_trgm_trgm_idx;
Create GIN index t_trgm_trgm_idx
postgres=# create index t_trgm_trgm_idx on t_trgm using gin(trgm gin_trgm_ops);
Note: Only row-store table is supported

JSONB index

Create table t_jsonb and create JSONB index
postgres=# create table t_jsonb(id int,f_jsonb jsonb);
postgres=# create index t_jsonb_f_jsonb_idx on t_jsonb using gin(f_jsonb);
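The default GIN operator class for jsonb supports the containment operator @>, so queries like the following sketch can use the index (the sample document shape is assumed):

```sql
-- Containment lookup that the GIN index can serve.
select * from t_jsonb where f_jsonb @> '{"name": "tbase"}'::jsonb;
```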

Array index

Create table t_array and insert data
postgres=# create table t_array(id int, mc text[]);
postgres=# insert into t_array select t,('{'||md5(t::text)||'}')::text[] from generate_series(1,1000000) as t;
Create array index t_array_mc_idx
postgres=# create index t_array_mc_idx on t_array using gin(mc);
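A GIN index on an array column serves containment queries; for example (the lookup value here is illustrative):

```sql
-- Finds rows whose mc array contains the given element.
select * from t_array where mc @> array[md5('1')];
```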

Btree_GIN any field index

Create table gin_mul and insert data
postgres=# create table gin_mul(f1 int, f2 int, f3 timestamp, f4 text, f5 numeric, f6 text);
postgres=# insert into gin_mul select random()*5000, random()*6000, now()+((3000060000*random())||' sec')::interval , md5(random()::text), round((random()*100000)::numeric,2), md5(random()::text) from generate_series(1,1000000);
Create btree_gin extension and create index gin_mul_gin_idx
postgres=# create extension btree_gin;
postgres=# create index gin_mul_gin_idx on gin_mul using gin(f1,f2,f3,f4,f5,f6);

Multi-Field index

Create an index that covers multiple fields to improve the performance of queries that filter on those fields, including OR conditions.
CREATE TABLE t_mul_idx (f1 int, f2 int, f3 int, f4 int);
CREATE INDEX t_mul_idx_idx ON t_mul_idx(f1, f2, f3);

Precautions for using multi-field index

When using OR query conditions, Bitmap scanning supports conditions on at most two different fields.

Sample query plan

Query using multi-field index:
EXPLAIN SELECT * FROM t_mul_idx WHERE f1=1 OR f2=2;
When the query condition exceeds two fields, Seq Scan will be used instead of Bitmap Heap Scan:
EXPLAIN SELECT * FROM t_mul_idx WHERE f1=1 OR f2=2 OR f3=3;

Index and Performance

If all the columns a query returns are contained in the index, an Index Only Scan can be used, reducing I/O overhead.
For insert workloads, maintaining one multi-field index is usually cheaper than maintaining several single-field indexes.
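A minimal sketch of an Index Only Scan using the multi-field index above (the exact plan depends on version and statistics):

```sql
-- All selected columns are in t_mul_idx_idx, so the heap need not be
-- visited once the visibility map is current.
vacuum analyze t_mul_idx;
explain select f1, f2, f3 from t_mul_idx where f1 = 1;
```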

Global Index

A global index is used to solve data-locality problems in an MPP distributed database, especially for queries that filter on columns other than the shard key.

Supported Features

General shard table global index: Version >= 5.06.2
Partition table global index: Version >= 5.06.3
Distributed global index: Version >= 5.06.3

Create Global Index

CREATE TABLE t1 (f1 int, f2 int);
CREATE GLOBAL INDEX t1_f2_idx ON t1(f2);

View Index Type

\d+ t1

Global Index Execution Plan

EXPLAIN SELECT * FROM t1 WHERE f2=1;

Global Unique Index

CREATE GLOBAL UNIQUE INDEX t1_f2_gidx ON t1(f2);

Features Not Supported by Global Index

The truncate operation is not supported.
The reindex of global indexes is not supported.
Creating global indexes CONCURRENTLY is not supported.
Executing vacuum full on tables with global indexes is not supported.

Deleting an Index

DROP INDEX t_appoint_id_idx;

Modifying table structure

Modifying table name

postgres=# ALTER TABLE t RENAME TO tbase;

Add comments to tables or fields

postgres=# COMMENT ON TABLE tbase IS 'TBase distributed relational database system';
postgres=# COMMENT ON COLUMN tbase.nickname IS 'TBase nickname is Elephant';

Add fields to table

postgres=# ALTER TABLE tbase ADD COLUMN age INTEGER;

Modify field type

postgres=# ALTER TABLE tbase ALTER COLUMN age TYPE FLOAT8;

Modify field default value

postgres=# ALTER TABLE tbase ALTER COLUMN age SET DEFAULT 0.0;

Delete field

postgres=# ALTER TABLE tbase DROP COLUMN age;

Add primary key

postgres=# ALTER TABLE t ADD CONSTRAINT t_id_pkey PRIMARY KEY (id);

Delete primary key

postgres=# ALTER TABLE t DROP CONSTRAINT t_id_pkey;

Rebuild primary key

postgres=# CREATE UNIQUE INDEX CONCURRENTLY t_id_temp_idx ON t (id);
postgres=# ALTER TABLE t DROP CONSTRAINT t_pkey, ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_id_temp_idx;

Add foreign key

CREATE TABLE t_p (f1 INT NOT NULL, f2 INT, PRIMARY KEY (f1));
CREATE TABLE t_f (f1 INT NOT NULL, f2 INT);
postgres=# ALTER TABLE t_f ADD CONSTRAINT t_f_f1_fkey FOREIGN KEY (f1) REFERENCES t_p (f1);

Delete foreign key

postgres=# ALTER TABLE t_f DROP CONSTRAINT t_f_f1_fkey;

Modify the schema of the table

postgres=# ALTER TABLE t SET SCHEMA public;

Change the table owner

postgres=# ALTER TABLE tbase OWNER TO pgxz;

Change column name

postgres=# ALTER TABLE tbase RENAME COLUMN city TO cityname;

Modify table fill factor

postgres=# ALTER TABLE t1 SET (fillfactor=80);

Add a trigger

INSERT trigger

The trigger examples below assume a table t_trigger(f1 INT, f2 INT):
postgres=# CREATE TABLE t_trigger(f1 INT, f2 INT);
CREATE TABLE

postgres=# CREATE OR REPLACE FUNCTION t_trigger_insert_trigger_func() RETURNS TRIGGER AS $body$
BEGIN
IF NEW.f2 < 0 THEN
NEW.f2 := 0;
END IF;
RETURN NEW;
END;
$body$
LANGUAGE plpgsql;

postgres=# CREATE TRIGGER t_trigger_insert_trigger BEFORE INSERT ON t_trigger FOR EACH ROW EXECUTE PROCEDURE t_trigger_insert_trigger_func();

UPDATE trigger

postgres=# CREATE OR REPLACE FUNCTION t_trigger_update_trigger_func() RETURNS TRIGGER AS $body$
BEGIN
IF NEW.f2 < 0 THEN
NEW.f2 := OLD.f2;
END IF;
RETURN NEW;
END;
$body$
LANGUAGE plpgsql;

postgres=# CREATE TRIGGER t_trigger_update_trigger BEFORE UPDATE ON t_trigger FOR EACH ROW EXECUTE PROCEDURE t_trigger_update_trigger_func();

DELETE trigger

postgres=# CREATE OR REPLACE FUNCTION t_trigger_delete_trigger_func() RETURNS TRIGGER AS $body$
BEGIN
IF OLD.f2 = 0 THEN
RETURN NULL;
END IF;
RETURN OLD;
END;
$body$
LANGUAGE plpgsql;

postgres=# CREATE TRIGGER t_trigger_delete_trigger BEFORE DELETE ON t_trigger FOR EACH ROW EXECUTE PROCEDURE t_trigger_delete_trigger_func();

Multiple event triggers

postgres=# CREATE TABLE t_trigger_mulevent(f1 INT, f2 INT);
CREATE OR REPLACE FUNCTION t_trigger_mulevent_func() RETURNS TRIGGER AS $body$
BEGIN
IF NEW.f2 < 0 THEN
NEW.f2 := 0;
END IF;
RETURN NEW;
END;
$body$
LANGUAGE plpgsql;

postgres=# CREATE TRIGGER t_trigger_insert_update_trigger BEFORE INSERT OR UPDATE ON t_trigger_mulevent FOR EACH ROW EXECUTE PROCEDURE t_trigger_mulevent_func();

Deleting Triggers

postgres=# DROP TRIGGER t_trigger_insert_update_trigger ON t_trigger_mulevent;
postgres=# DROP FUNCTION t_trigger_mulevent_func();

Creating and dropping views

Creating View

postgres=# create view t_range_view as select * from t_range;
CREATE VIEW
postgres=# select * from t_range_view;
 f1 |             f2             | f3  | f4
----+----------------------------+-----+----
  1 | 2017-09-27 23:17:39.674318 |   1 |
  2 | 2017-09-27 23:17:39.674318 |  50 |
  2 | 2017-09-27 23:17:39.674318 | 110 |
  1 | 2017-09-27 23:39:45.841093 | 151 |
  3 | 2017-09-27 23:17:39.674318 | 100 |
(5 rows)

Redefining data type

First drop the previous view, then recreate it with the cast:
postgres=# drop view t_range_view;
DROP VIEW
postgres=# create view t_range_view as select f1,f2::date from t_range;
CREATE VIEW
postgres=# select * from t_range_view;
 f1 |     f2
----+------------
  1 | 2017-09-27
  2 | 2017-09-27
  2 | 2017-09-27
  1 | 2017-09-27
  3 | 2017-09-27
(5 rows)

Redefine the data type and rename the column:
postgres=# drop view t_range_view;
DROP VIEW
postgres=# create view t_range_view as select f1,f2::date as mydate from t_range;
CREATE VIEW
postgres=# select * from t_range_view;
 f1 |   mydate
----+------------
  1 | 2017-09-27
  2 | 2017-09-27
  2 | 2017-09-27
  1 | 2017-09-27
  3 | 2017-09-27
(5 rows)

When a table or column referenced by a view is renamed, TBase keeps the view definition in sync automatically, so existing views are not broken:
postgres=# \d+ t_view
View "tbase.t_view"
Column | Type | Collation | Nullable | Default | Storage | Description
--------+---------+-----------+----------+---------+----------+-------------
id | integer | | | | plain |
mc | text | | | | extended |
View definition:
SELECT t.id,
t.mc
FROM t;

postgres=# alter table t rename to t_new;
ALTER TABLE
Time: 62.875 ms

postgres=# alter table t_new rename mc to mc_new;
ALTER TABLE
Time: 22.081 ms

postgres=# \d+ t_view
View "tbase.t_view"
Column | Type | Collation | Nullable | Default | Storage | Description
--------+---------+-----------+----------+---------+----------+-------------
id | integer | | | | plain |
mc | text | | | | extended |
View definition:
SELECT t_new.id,
t_new.mc_new AS mc
FROM t_new;