Oracle Tables

A table consists of rows and columns; each row corresponds to a single record. When creating a table you give it a name and define the columns that belong to it. You can specify the width, or the precision and scale, for certain columns, and columns can be given default values.
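As a minimal sketch of these options (the table and column names are made up for illustration):

```sql
create table orders (
  order_id   number(10),                  -- width 10
  amount     number(8,2),                 -- precision 8, scale 2
  status     varchar2(10) default 'NEW',  -- default value
  created_on date         default sysdate);
```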

Oracle provides a number of different data types for columns; see the data types section for more information.

Dual Table

The dual table belongs to the sys schema and is created automatically when the data dictionary is created. Every query in Oracle has to select from a table, even one that just evaluates an arithmetic expression, and the dual table serves as a catch-all table for such expressions.
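For example, dual is commonly used to evaluate an expression or a function without touching any real table:

```sql
select 2 + 2 from dual;
select sysdate from dual;
select user from dual;
```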

Oracle tables

There are four major types of tables in Oracle:

Heap Organised Tables This is the normal, standard table. Data is managed in a heap-like fashion: a heap is a bunch of space that is used in a somewhat random order.
Index Organised Tables Here the table is stored in an index structure (B-Tree). Unlike a heap table, the rows themselves are stored in order, the order being determined by the primary key.
Clustered Tables Many tables may be stored joined together on the same data blocks; rows that contain the same cluster key value are physically stored together. The data is clustered around the cluster key value, and the cluster key is located via a B-Tree index.
Partitioned Tables You can divide a large amount of data into subtables, called partitions, according to various criteria. Partitioning is especially useful in data warehousing.

Table names can be 1-30 characters in length and must start with a letter; you can use _ (underscore), # (hash) or $ (dollar) in the name, but you cannot use reserved words. By putting double quotes around the table name when creating it you can get around these limitations, but this is not advised.

Oracle uses namespaces, which means the same name cannot be used for two objects within the same namespace. Below is a list of the namespaces within which object names must be unique.

namespace1 tables, views, sequences, private synonyms, procedures, functions, packages, materialized views, user-defined types
namespace2 tablespaces, indexes, constraints, clusters, triggers, database links, dimensions, roles, public synonyms, profiles and pfiles.

Table Sizing

Before you create any table it is a good idea to estimate how big the table is going to be; this allows you to make the right decisions about space allocation. Oracle 10g has a table-size estimation tool within OEM, or you can use the DBMS_SPACE package.


declare
  l_used_bytes      number(10);
  l_allocated_bytes number(10);
begin
  -- estimate for 10,000 rows averaging 100 bytes with pctfree 10 (sample values)
  dbms_space.create_table_cost('USERS', 100, 10000, 10, l_used_bytes, l_allocated_bytes);
  dbms_output.put_line('used = ' || l_used_bytes || ' bytes  ' || 'allocated = ' || l_allocated_bytes || ' bytes');
end;
/

OEM Admin tab -> Tables (in schema list) -> Create Button -> select table type -> enter the data -> enter estimated number of rows

To see how to reduce the size of a table see automatic segment space management.

High Water Mark

If you view a table as a flat structure, a series of blocks laid one after another in a line from left to right, the high water mark is the rightmost block that has ever contained data. Over time the high water mark rises as data is added. However, deleting rows does not lower it: there may be empty blocks under the high water mark until the object is rebuilt, moved or emptied with the TRUNCATE command.

The high water mark is relevant since Oracle will scan all blocks under the mark, even if they do not contain data, during a full scan. To see how you can reduce the HWM see automatic segment space management.
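As a sketch, after gathering statistics you can compare the blocks below the HWM against the rows they actually hold (the table name is assumed):

```sql
exec dbms_stats.gather_table_stats(user, 'EMP');

-- blocks = blocks below the HWM, empty_blocks = allocated but never used
select blocks, empty_blocks, num_rows
  from user_tables
 where table_name = 'EMP';
```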


The freelist is where Oracle keeps track of blocks under the high water mark that have free space on them. Each object has at least one freelist associated with it. You can improve performance by creating additional freelists to reduce contention on heavily used objects. There are four parameters that can affect performance:

FREELISTS every table manages the blocks it has allocated in the heap on a freelist, and a table may have more than one. Increase the number of freelists if a table is subject to heavy concurrent inserts.
PCTFREE a measure of how full a block can be made during an insert. Once a block has less than PCTFREE percent free space it is no longer used for inserts.
PCTUSED a measure of how empty a block must become before it is used for inserts again. Once the used space in a block drops below PCTUSED percent, the block becomes available for inserts again.
INITRANS the number of transaction slots initially allocated to a block. If set too low this can cause concurrency issues in a block that is used by many users. See locking for more information.
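These parameters are set in the create table statement; the values below are illustrative only, not recommendations:

```sql
create table emp_busy (
  empno number(5),
  ename varchar2(15))
pctfree 20    -- stop inserting into a block when less than 20% is free
pctused 40    -- make the block available again when usage drops below 40%
initrans 4    -- pre-allocate four transaction slots per block
storage (freelists 4);  -- four freelists for concurrent inserts
```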

Row Migration

Row migration is when a row is forced to leave the block it was created on due to insufficient space on the block. The row is migrated to another block and a pointer is left on the original block giving the new location. Too many migrated rows lead to poor performance, as Oracle must perform one additional read to follow each pointer.
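A sketch of how migrated (chained) rows can be detected, assuming the CHAINED_ROWS table has already been created with the utlchain.sql script:

```sql
-- populate CHAINED_ROWS with the rowids of migrated/chained rows
analyze table emp list chained rows into chained_rows;

select owner_name, table_name, head_rowid
  from chained_rows
 where table_name = 'EMP';
```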

Heap Organized Tables

This is the standard Oracle table; data is managed in a heaped fashion (no order). Note that when an object table is created a bit of magic happens behind the scenes: you get hidden columns, extra indexes, pseudo columns, etc.


create table emp (
  empno number(5) primary key,
  ename varchar2(15) not null,
  national_insurance varchar2(9) not null,
  job varchar2(25),
  manager number(5),
  deptno number(3) not null constraint dept_fkey references hr.dept(dept_id))
tablespace users;

Creating with CTAS

create table emp_new parallel (degree 4) nologging as select * from emp;

Note: CTAS = create table as select. The parallel clause loads the data using multiple processes, and nologging avoids writing the changes to the redo logs (a minimum of information will always be logged).

Removing (restorable)

drop table emp cascade constraints;

Note: The table can be retrieved by the 'flashback table' command

Removing (permanently)

drop table emp purge;

Renaming a table alter table emp rename to employees;
Moving a table

alter table emp move tablespace new_tablespace;

Note: do this when you want to change any storage parameters; also remember that the rowids will change, so indexes will have to be re-created or rebuilt.
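After a move the table's indexes are left unusable; a sketch of checking and rebuilding (the index name is assumed):

```sql
-- any index on the moved table will show as UNUSABLE
select index_name, status from user_indexes where table_name = 'EMP';

alter index emp_pk rebuild;
```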

Emptying a table

truncate table emp;

Note: there is no rollback from this command; it is a DDL command.

Restoring a table after a drop

flashback table emp to before drop;

Note: see flashback table, this will not bring back a truncated table.

adding columns alter table emp add(retired char(1));
removing columns alter table emp drop(retired);
Marking column/s as unused

alter table emp set unused (retired);
select * from user_unused_col_tabs;

Note: once a column is marked as unused there is no way to get it back other than a restore.

removing unused columns alter table emp drop unused columns;
Removing unused columns and checkpointing

alter table emp drop unused columns checkpoint 10000;

Note: this stops the undo tablespace from filling up by checkpointing after every 10,000 rows.

Renaming a column alter table emp rename retired to dead;
Count number of rows select count(*) from emp;
Delete duplicate rows

delete from t
 where rowid in (select rid
                   from (select rowid rid,
                                row_number() over
                                  (partition by cust_seg_nbr order by rowid) rn
                           from t)
                  where rn <> 1);

Temporary Tables

Temporary tables are used to hold result sets, either for the duration of a transaction or a session. The data held in a temporary table is only ever visible to the current session - no other session will ever see another session's data. A session will never block another session using its temporary table. Also remember that a temporary table generates some undo information, which in turn generates some redo. The benefits of using a temporary table are reduced redo generation (only the undo is logged), data that is private to the session, and automatic cleanup at the end of the transaction or session.

Temporary tables do not support some features of permanent tables: they cannot be partitioned, clustered or index organized, and they cannot have foreign key constraints.

When creating a temporary table you choose how long the data lasts using the options below; remember only the data is truncated, the table structure itself remains until dropped.

Listing select table_name, temporary from user_tables where temporary = 'Y';
Creating (session table)

create global temporary table flight_status (
  destination varchar2(30),
  start_date date,
  return_date date,
  ticket_price number)
on commit preserve rows;

Creating (transaction table) create global temporary table flight_status (
  destination varchar2(30),
  start_date date,
  return_date date,
  ticket_price number)
on commit delete rows;
Creating a Temp table with CTAS

create global temporary table temp_test
on commit delete rows
as select * from all_objects where 1=0;

Note: the 'on commit delete rows' is stated before the CTAS


Removing drop table flight_status;

Note: when you drop a temporary table it does not go into the recyclebin

Index Organized Tables

An IOT is a table stored in an index structure (B-Tree); it is stored in an ordered fashion, sorted by its primary key. When overflow is used for an IOT, additional data that cannot fit in the index is stored in an overflow segment, with a pointer from the index entry to the rest of the row. Getting the right mix of data on the index block versus data in the overflow segment is the most critical part of an IOT setup. Consider the freelists, as these can affect the table; PCTFREE is not that important for the index itself and PCTUSED doesn't normally come into play. When an overflow segment is used, PCTFREE and PCTUSED have the same meaning as for a heap table.

The differences between a regular Oracle table and an IOT:

Regular Oracle table                    Index organized table
Physical ROWIDs                         Logical ROWIDs
Uniquely identified by ROWID            Uniquely identified by primary key
Unique constraints allowed              Unique constraints not allowed
Can contain LONG and LOB data           Can't contain LONG data
Allowed in table clusters               Not allowed in table clusters
Larger space requirements               Smaller space requirements
Slower data access                      Faster data access
Do not order data                       Order data

To use secondary bitmap indexes on an IOT you need to create another segment, a mapping table. The mapping table is a heap-organized table that stores the logical rowids of the index-organized table, one row per IOT row. The bitmap indexes are in fact built on this mapping table, not on the underlying IOT.

Creating


create table employee_new (
  employee_id number,
  dept_id number,
  name varchar2(30),
  address varchar2(120),
  constraint pk_employee_new primary key (employee_id))
organization index tablespace empindex_01
pctthreshold 25
overflow tablespace overflow_tbs;

Note: organization index states that it is an IOT. pctthreshold states the percentage of space reserved in the index blocks; any part of a row that does not fit within the 25 percent threshold in each data block is saved in the overflow area.

Removing drop table employee_new;
Index info

select table_name, tablespace_name, iot_name, iot_type from user_tables;
select index_name, tablespace_name, index_type, table_name from user_indexes;
select segment_name, tablespace_name, segment_type from user_segments;

Secondary Bitmap Index

alter table emp_iot move mapping table;
select segment_name, tablespace_name, segment_type from user_segments; (should see one called map)
create bitmap index emp_dept_idx on emp_iot(emp_dept);

External Tables

Oracle allows the use of external tables, that is, tables that reside in operating-system files outside the database. The definition of an external table is created in the data dictionary, which allows you to load its data into other Oracle tables; no database space is actually used by the external table itself. If you drop an external table you only remove its definition from the dictionary. See external tables for more details.
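A minimal sketch of an external table; the directory path and file name are assumptions:

```sql
-- the directory object must point at an existing OS directory
create directory ext_dir as '/tmp/data';

create table emp_ext (
  empno number(5),
  ename varchar2(15))
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (
    records delimited by newline
    fields terminated by ',')
  location ('emp.csv'))
reject limit unlimited;
```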

Nested Tables

A nested table is one of two collection types in Oracle; it is very similar to a child table in a traditional parent/child table pair. It gives the illusion that each row in the parent table has its own child table, i.e. if there are 100 rows in the parent table there are virtually 100 nested tables. There are two ways to use a nested table: in PL/SQL code, as a way to extend the PL/SQL language, and as a physical storage mechanism for persistent storage of collections. They are very rarely used as a storage mechanism, mainly because the child data is awkward to query on its own, the rows carry the overhead of a hidden nested-table identifier column, and a conventional parent/child table pair is more flexible.
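A sketch of the storage form; the type and table names are made up:

```sql
-- the collection type must exist before the table that uses it
create type emp_list_t as table of varchar2(30);
/

create table dept_nt (
  deptno number(3),
  dname  varchar2(30),
  emps   emp_list_t)
nested table emps store as dept_nt_emps;
```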

Partitioned Tables

Oracle tables can be quite large; partitioning basically means dividing a table up into smaller chunks. All the partitions share the same logical definition, column definitions and constraints. Performance is greatly improved because a query only searches the relevant partitions of the table. Partitions can be kept on different disks to further increase performance, and you can back up, index and load data into partitions independently of each other. There are five ways to partition data:

range partitioning used for data that can be separated into ranges based on some criterion, e.g. date, part number, sequence.
hash partitioning if data in a range partition is not evenly distributed (lots of data for one particular date, say) performance decreases. Hash partitioning uses Oracle's hashing algorithms to assign a hash value to each row's partitioning key and place it in the appropriate partition, so the data will hopefully be evenly distributed across the partitions.
list partitioning used for data that can be separated into lists based on some criterion, e.g. city, territory. Again the data can be unevenly distributed across the partitions.
composite range-hash partitioning basically a combination of range and hash partitioning, making sure that the data is evenly distributed across the partitions.
composite range-list partitioning basically a combination of range and list. First partition on a range of values, then break up the first set of partitions using a list of discrete values.

range partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null)
partition by range (sale_year, sale_month, sale_day)
(partition sales_q1 values less than (2007, 04, 01) tablespace ts1,
partition sales_q2 values less than (2007, 07, 01) tablespace ts2,
partition sales_q3 values less than (2007, 10, 01) tablespace ts3,
partition sales_q4 values less than (2008, 01, 01) tablespace ts4);

Note: ideally each of the tablespaces should be on its own disk for increased performance

Hash partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null)
partition by hash (ticket_no)
partitions 4
store in (ts1, ts2, ts3, ts4);

Note: you have no control on where the data is put

List Partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null,
destination_city char(3),
start_city char(3))
partition by list (start_city)
(partition north_sales values ('newcastle', 'sunderland', 'leeds') tablespace ts1,
partition south_sales values ('brighton', 'bournemouth', 'cornwall') tablespace ts2,
partition east_sales values ('norwich', 'ipswitch', 'yarmouth') tablespace ts3,
partition west_sales values ('birmingham', 'cardiff', 'coventry') tablespace ts4);

composite range-hash partitioning

create table football_kit (equipno number, equipname varchar(32), price number)
partition by range (equipno) subpartition by hash(equipname)
subpartitions 8 store in (ts1, ts2, ts3, ts4)
(partition p1 values less than (1000),
partition p2 values less than (2000),
partition p3 values less than (3000),
partition p4 values less than (MAXVALUE));

composite range-list partitioning

create table quarterly_sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null,
sale_date date not null,
destination_city char(3),
start_city char(3))
partition by range (sale_date)
subpartition by list (start_city)
(partition q1_2007 values less than (to_date('1-apr-2007','dd-mon-yyyy')) tablespace ts1
  (subpartition q12007_north_sales values ('newcastle', 'sunderland', 'leeds'),
   subpartition q12007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
   subpartition q12007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
   subpartition q12007_west_sales values ('birmingham', 'cardiff', 'coventry')),
partition q2_2007 values less than (to_date('1-jul-2007','dd-mon-yyyy')) tablespace ts1
  (subpartition q22007_north_sales values ('newcastle', 'sunderland', 'leeds'),
   subpartition q22007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
   subpartition q22007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
   subpartition q22007_west_sales values ('birmingham', 'cardiff', 'coventry')),
partition q3_2007 values less than (to_date('1-oct-2007','dd-mon-yyyy')) tablespace ts1
  (subpartition q32007_north_sales values ('newcastle', 'sunderland', 'leeds'),
   subpartition q32007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
   subpartition q32007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
   subpartition q32007_west_sales values ('birmingham', 'cardiff', 'coventry')),
partition q4_2007 values less than (to_date('1-jan-2008','dd-mon-yyyy')) tablespace ts1
  (subpartition q42007_north_sales values ('newcastle', 'sunderland', 'leeds'),
   subpartition q42007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
   subpartition q42007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
   subpartition q42007_west_sales values ('birmingham', 'cardiff', 'coventry')));

Note: a date column (sale_date, added here) is needed as the range key to match the to_date boundary values.

Partition Maintenance
Adding partitions alter table sales_data
add partition sales_quarters values less than (to_date('1-apr-2007','dd-mon-yyyy')) tablespace ts4;
Splitting partitions alter table sales_data split partition ticket_sales01 at (2000) into
(partition ticket_sales01A, partition ticket_sales01B);
Merging partitions alter table ticket_sales merge partitions ticket_sales01A, ticket_sales01B into partition ticket_sales02;
Renaming partitions alter table ticket_sales rename partition ticket_sales01B to ticket_sales01C;
Exchanging partitions

alter table ticket_sales exchange partition ticket_sales02 with table ticket_sales03;

Note : This enables you to convert a regular nonpartitioned table into a partition of a partitioned table.

Dropping partitions

alter table sales_data drop partition sales_quarters;

Note: if you have data in the partitions you intend to drop, you need to use the 'update global indexes' clause with the preceding statement, otherwise all global indexes will become invalidated. Local indexes will still be okay because they are mapped directly to the affected partitions only.
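For example, keeping the global indexes usable while dropping a populated partition:

```sql
alter table sales_data drop partition sales_quarters update global indexes;
```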

Coalescing partitions

alter table sales_data coalesce partition;

Note: you can coalesce hash and list partitioned tables

Index Clustered Tables

A cluster is a way to store a group of tables that share some common columns in the same database blocks, keeping related data together on the same block; the goal is to reduce disk I/O and therefore increase access speed when you join the tables. You should not use a clustered table when the tables are heavily updated, when the individual tables are often full-table scanned, or when the tables need to be truncated (you cannot truncate a table in a cluster).

Permissions grant create cluster to test02;

# Create the cluster
create cluster emp_dept( deptno number (3)) tablespace users;

# Create the two tables that are part of the cluster
create table dept ( deptno number (3) primary key) cluster emp_dept (deptno);

create table emp ( empno number (5) primary key, ename varchar2(15), deptno number (3) references dept) cluster emp_dept (deptno);

Removing drop cluster emp_dept including tables;
Cluster Information select table_name, tablespace_name, cluster_name from user_tables; (only see tables, not cluster)
select segment_name, tablespace_name, segment_type from user_segments; (only see the cluster, not tables)

Hash Cluster Tables

Hash cluster tables are similar to index cluster tables, except that no cluster index is used; the data is the index in this case. The cluster key is hashed into a block address and the data is expected to be there. The important things to understand are that the number of hash keys (HASHKEYS) is fixed at creation time and that space for the cluster is pre-allocated, so the cluster must be sized correctly up front.

Hash clusters are suitable when the number of cluster key values can be predicted reasonably accurately, and when the tables are queried mostly by equality on the cluster key rather than by range scans.


# Create the hash cluster
create cluster emp_dept(deptno number (3)) tablespace users hash is deptno hashkeys 200;

# Create the two tables that are part of the cluster
create table dept ( deptno number (3) primary key) cluster emp_dept (deptno);

create table emp ( empno number (5) primary key, ename varchar2(15), deptno number (3) references dept) cluster emp_dept (deptno);

Removing drop cluster emp_dept including tables;

Table Comments

You can place comments on tables and columns within a table, this is helpful to anyone who will be supporting the database after the originator has left. Also note that you can also put comments on views.

Creating table comment comment on table emp is 'This is the company employee table';
Removing table comment comment on table emp is '';
Displaying table comments select table_name, comments from user_tab_comments;
Create table column comment comment on column emp.nat_i is 'National Insurance column';
Removing table column comment comment on column emp.nat_i is '';
Display column comments select table_name, column_name, comments from user_col_comments where table_name = 'EMP';

Table Redefinition

Oracle offers online table redefinition, which lets you redefine objects like tables while users are still reading and writing to them. Among other things you can change a table's storage parameters, move it to another tablespace, add or drop partitioning, and add, drop or rename columns.

You can perform online table redefinition using two methods

primary key Select a primary key or pseudo-primary key to use for the redefinition. Pseudo-primary keys are unique keys with all component columns having NOT NULL constraints. For this method, the versions of the tables before and after redefinition should have the same primary key columns. This is the preferred and default method of redefinition.
rowid Use this method if no key is available. In this method, a hidden column named M_ROW$$ is added to the post-redefined version of the table. It is recommended that this column be dropped or marked as unused after the redefinition is complete. You cannot use this method on index-organized tables.

There are a number of steps involved when you redefine a table's structure:

  1. Determine if a table is a good candidate for redefinition
  2. Decide the new structure for the table, and create a new image of the table
  3. Start the redefinition process by using the dbms_redefinition package
  4. Create the necessary constraints and triggers on the new table
  5. Perform periodic synchronization and validation of data in the new table
  6. Complete the redefinition of the table
  7. Cleanup

An example table redefinition

Original table

employee_id   not null   number(6)
first_name               varchar2(20)
last_name     not null   varchar2(20)
email         not null   varchar2(25)
phone_number             varchar2(20)
hire_date     not null   date
job_id        not null   varchar2(20)
salary                   number(8,2)
commission_pct           number(2,2)
manager_id               number(6)
department_id            number(4)
step 1

-- default option, which uses the primary key
exec dbms_redefinition.can_redef_table('vallep','employees', dbms_redefinition.cons_use_pk);

Note: specifying cons_use_pk is optional as it is the default method

-- using the rowid option instead
exec dbms_redefinition.can_redef_table('vallep','employees', dbms_redefinition.cons_use_rowid);

step 2

-- create the new table adding different storage parameters, partition, etc

create table vallep.employees_temp (
employee_id      number(6),
first_name       varchar2(20) not null,
last_name        varchar2(20) not null,
email            varchar2(25) not null,
phone_number     varchar2(20),
hire_date        date not null,
job_id           varchar2(20) not null,
commission_pct   number(2,2),
manager_id       number(6),
department_id    number(4))
partition by range(employee_id)
(partition employees1 values less than (100) tablespace EMP01,
partition employees2 values less than (300) tablespace EMP02);

step 3

-- start the redefinition process

begin
  dbms_redefinition.start_redef_table('vallep', 'employees', 'employees_temp',
    'employee_id employee_id,
     first_name first_name,
     last_name last_name,
     email email,
     phone_number phone_number,
     hire_date hire_date,
     job_id job_id,
     commission_pct commission_pct,
     manager_id manager_id,
     department_id department_id');
end;
/

-- check the two tables
select count(*) from employees;
select count(*) from employees_temp;

step 4

-- now copy the constraints, triggers, etc.

declare
  num_errors pls_integer;
begin
  dbms_redefinition.copy_table_dependents('vallep', 'employees', 'employees_temp',
    dbms_redefinition.cons_orig_params, true, true, true, true, num_errors);
  dbms_output.put_line('errors = ' || num_errors);
end;
/

Note: during execution two tables are created, a temporary one and a permanent one. The temporary one, called rupd$_employee, lasts only for the session; the permanent one holds a snapshot of all the changes made to the original employees table, a bit like a materialized view log.

Step 5

-- now need to check for any errors
select object_name, base_table_name, ddl_txt from dba_redefinition_errors;

-- now synchronize the data in the interim and source tables
exec dbms_redefinition.sync_interim_table('vallep','employees','employees_temp');

Step 6

-- complete the redefinition
exec dbms_redefinition.finish_redef_table('vallep','employees','employees_temp');

Note: the following happens
oracle reads the materialized view log and updates the interim table
the employees table is redefined so it has the same attributes as the interim table
referential constraints/triggers involving the employees_temp table are enabled
the two tables are briefly locked in exclusive mode to make the necessary changes to the data dictionary
the materialized view and logs are dropped

Step 7

-- once you are happy with the redefined employees table you can drop the temp table
drop table employees_temp;

If it all goes wrong

-- if you obtain any errors during step 5, or you just want to abort, run
exec dbms_redefinition.abort_redef_table('vallep','employees','employees_temp');
The easier method to perform all of this is to use OEM.

Useful Views

DBA_TABLES describes all relational tables in the database
DBA_TAB_COLUMNS describes the columns of all tables, views, and clusters in the database
DBA_UNUSED_COL_TABS lists all tables that have columns marked as unused
DBA_TAB_COMMENTS displays comments on all tables and views in the database
DBA_COL_COMMENTS displays comments on the columns of all tables and views in the database
DBA_TAB_PARTITIONS provides the partition-level partitioning information, partition storage parameters, and partition statistics determined by ANALYZE statements for all partitions in the database

Dump a table object

You can dump the contents of a table block, see dump a block of data