
How to Pin Objects to the Shared Pool?

Pinning objects in the shared pool is key to shared pool tuning. Keeping objects pinned reduces fragmentation and the chances of encountering the ORA-04031 error.

You must determine which objects to pin. The right candidates are particular to your own database: the application you run, the size of the database, and the activity on it.

You need to pin objects when either of these applies:
1. You have already encountered ORA-04031 and need to resolve it, or
2. You have large packages that users call frequently. (It is better to pin the necessary objects at database startup.)

Pinning a package to shared pool
1. First load the package by referencing it. (Oracle itself automatically loads SYS.STANDARD, SYS.DBMS_STANDARD and SYS.DIUTIL.) Here is an example:

Suppose pk1 is a package with a variable called dummy. Assigning a value to dummy executes the package, which loads it into the shared pool:

begin
  pk1.dummy := 0;  /* this assignment executes the package */
end;
/
                                                                            
2. Then pin the package. Note that DBMS_SHARED_POOL.KEEP takes the object name as a string:

EXEC DBMS_SHARED_POOL.KEEP('OWNER.PK1');

Pinning stored procedure/function to shared pool
You can pin procedures and triggers with the DBMS_SHARED_POOL package. Both procedures and packages are pinned with the 'P' flag, which is the default (so you can omit it). Triggers are pinned with 'R', and anonymous PL/SQL blocks take any letter other than [p,P,r,R] as the flag. Refer to dbmspool.sql for more documentation.

Here is an example: 
EXEC DBMS_SHARED_POOL.KEEP('OWNER.TRIGGER', 'R');

What objects should I pin?
You can check the x$ksmlru fixed table.  This table keeps track of the objects and the corresponding number of objects flushed out of the shared pool to allocate space for the load. These objects are stored and flushed out based on the Least Recently Used (LRU) algorithm. 
             
Because this is a fixed table, once you query the table, Oracle will automatically reset the table, thus, you can only query the table once. Suggestion for workaround: spool the output to a file so you can capture the output for analysis. 
       
describe x$ksmlru        
    
Name                            Null?    Type
------------------------------- -------- --------------
ADDR                                     RAW(4)
INDX                                     NUMBER
KSMLRCOM                                 VARCHAR2(20)
KSMLRSIZ                                 NUMBER
KSMLRNUM                                 NUMBER
 
KSMLRNUM stores the number of objects that were flushed to load the large object. KSMLRSIZ stores the size of the object that was loaded (the contiguous memory allocated).

We do not need the other columns. 
 
Here is an example of a query you issue to find all the objects that are larger than size 5k which you may want to pin:    

select * from x$ksmlru where ksmlrsiz > 5000;    
                      
In general, pinning SYS.STANDARD, SYS.DBMS_STANDARD and SYS.DIUTIL, which are large packages used by Oracle, should help.
 
SYS.DIUTIL is used only during generation of SQL*Forms code, so it may not be necessary to pin it in your production database.
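Since pinned objects do not survive an instance restart, a common approach is a database startup trigger that re-pins them each time. A minimal sketch, assuming the DBA has installed DBMS_SHARED_POOL (dbmspool.sql) and granted EXECUTE on it, plus ADMINISTER DATABASE TRIGGER, to the trigger owner:

```
-- Sketch: re-pin key packages every time the instance starts.
-- Assumes EXECUTE on DBMS_SHARED_POOL and ADMINISTER DATABASE TRIGGER.
CREATE OR REPLACE TRIGGER pin_on_startup
AFTER STARTUP ON DATABASE
BEGIN
  DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
  DBMS_SHARED_POOL.KEEP('SYS.DBMS_STANDARD');
  -- add your own large, frequently used packages here
END;
/
```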

A Basic Tutorial on Oracle9i Forms and Reports

Covering Developer version 9i for Windows NT/2000/XP By Richard Holowczak

This tutorial introduces the Oracle9i Developer Suite Release 2 (August, 2002) that includes Oracle Forms 9.0 and Oracle Reports 9.0. The main objectives are to demonstrate and provide hands-on instructions on creating and modifying data entry and query forms in various configurations, reports and graphics.

Caveats: Please note that Oracle tends to change things like menu items, prompts and other small details between each major (certainly) and often minor release. Depending on the exact release of Oracle9i Developer Suite you have, you may find some small discrepancies between what is shown in this tutorial and what you see on your screen.

Prerequisites
Before following this tutorial, a student must have a valid user account in an Oracle server or a local installation of an Oracle database. Contact your DBA or systems administrator to learn the details of how Oracle server is set up in your organization.

If you have a local instance of the Oracle database, please do not use the SYSTEM (DBA) account to practice development. There is a good chance you can corrupt the entire database by mistake. You are much better off making use of the SCOTT/TIGER schema or (even better) creating a separate user (schema) for this tutorial work and for practicing.

One more word of caution: Oracle has designed Oracle9i as a three-tier architecture. This means the database (Oracle9i server) should run on one machine, the application server (Oracle9i Application Server, or the Oracle9iAS Containers for Java (OC4J) server as used here) runs on a second, and the client application (written in Oracle Forms and Reports, for example) runs on a third.

About the Author: Richard Holowczak is an Associate Professor of Computer Information Systems at the Zicklin School of Business in Baruch College, City University of New York.

Oracle XE-Data Uploading from CSV file

This is a step-by-step guide to one of the simplest ways to upload data from a CSV file into an Oracle table. I have loaded as many as 10,000 records this way without any performance issues; it went very swiftly.

Login to OracleXE first and follow the instructions below:

Step 1 - Click Utilities
Step 2 - Click Data Load/Unload
Step 3 - Click Load
Step 4
Step 5 - Click Load Spreadsheet Data
Steps 6 to 10 - Follow the on-screen wizard

Making Oracle Case sensitive, again

This article is a continuation of Making Oracle Case Insensitive. Read that first to understand the context discussed here.


By default, Oracle is case sensitive when querying data. We have now learned how to make Oracle case insensitive for queries. Let us ask some questions to get going.

What is the purpose of making Oracle case insensitive?
Once Oracle is case insensitive, you no longer need to worry about mixed-case values: Test, TEST and tESt are all treated as the same value. This is useful, for example, in data warehousing applications.

How to make Oracle Case Sensitive back again after this?
You have to alter two session parameters, NLS_COMP and NLS_SORT.

First set NLS_COMP to binary.

ALTER SESSION SET nls_comp=binary;

Secondly, set NLS_SORT back to BINARY. (GENERIC_M_CI would not do: the _CI suffix denotes a case-insensitive sort.)

ALTER SESSION SET nls_sort=BINARY;

Once these variables are altered Oracle is back in business as case sensitive.
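As a quick check that the session really is case sensitive again, a simple comparison sketch (run from SQL*Plus or any client):

```
-- After reverting NLS_COMP and NLS_SORT, this should return no rows,
-- because 'Test' and 'TEST' no longer compare as equal.
SELECT 'rows match' AS result
FROM   dual
WHERE  'Test' = 'TEST';
```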

Compile full schema in Oracle

To compile an entire schema in Oracle, there are two utilities provided by Oracle.

  • DBMS_UTILITY.COMPILE_SCHEMA
  • Two procedures in UTL_RECOMP

DBMS_UTILITY.COMPILE_SCHEMA
This procedure recompiles all objects in a schema (prior to Oracle 10g) or, optionally, only the invalid objects (from Oracle 10g onward). It is available to all schemas; no additional privileges are needed.

I am using Oracle 10g (10.2.0.3.0) for the following examples.

Syntax:
DBMS_UTILITY.COMPILE_SCHEMA( schema VARCHAR2, compile_all BOOLEAN, reuse_settings BOOLEAN);

Example (from SQL *Plus):
EXEC DBMS_UTILITY.COMPILE_SCHEMA( schema => 'SCOTT', compile_all => FALSE);

To recompile all objects irrespective of whether they are VALID or INVALID, simply omit the compile_all parameter (it defaults to TRUE):

EXEC DBMS_UTILITY.COMPILE_SCHEMA( schema => 'SCOTT');

UTL_RECOMP Package
This package, introduced in Oracle 10g, is specially designed for recompiling invalid objects. It recompiles PL/SQL modules, Java procedures and so on. By default only a SYSDBA account can run the subprograms in this package, but a DBA can grant EXECUTE on it to your schema (or to PUBLIC).

There are two ways in which invalid objects are recompiled if you choose to use this package.
  1. Sequentially
  2. In parallel

Use the subprogram RECOMP_SERIAL to compile invalid objects sequentially, one by one. The syntax of the subprogram is as follows:

UTL_RECOMP.RECOMP_SERIAL( schema VARCHAR2, flags BINARY_INTEGER);

Simply call the program as follows:

EXEC UTL_RECOMP.RECOMP_SERIAL (schema => 'SCOTT');

The subprogram RECOMP_PARALLEL is available within this package UTL_RECOMP to enable multiple threads for recompiling invalid objects. The syntax of the procedure is as follows:

UTL_RECOMP.RECOMP_PARALLEL (threads BINARY_INTEGER, schema VARCHAR2, flags BINARY_INTEGER);

Note: Using a large number of threads can actually slow recompilation down, given how I/O-intensive recompiling invalid objects is. There is no single optimal thread count; as a rule of thumb, use as few threads as possible.

Example of usage of the RECOMP_PARALLEL procedure:
EXEC UTL_RECOMP.RECOMP_PARALLEL (threads => 4, schema => 'SCOTT');
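Whichever routine you use, you can verify the result afterwards by listing any objects that are still invalid. A sketch against the standard dictionary views:

```
-- List objects in SCOTT that remain INVALID after recompilation
SELECT object_name, object_type
FROM   all_objects
WHERE  owner = 'SCOTT'
AND    status = 'INVALID'
ORDER  BY object_type, object_name;
```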

Coming soon: Article on comparison of these procedures.

How to validate values in PL/SQL

I chose to write on this topic to improve the readability of programs and to maintain a standard way of validating values within PL/SQL blocks, whether they be anonymous blocks, stored procedures, stored functions or packages. There has always been a need to write efficient code before delivering the final version.

Validating NULL Values

NVL Function
This function is a boon to programmers, and I always choose it for its simplicity. In Oracle, remember that any comparison or expression involving a NULL value also yields NULL. To avoid this, it is better to use the NVL() function.

Syntax:
NVL(variable1, variable2)

Example:
SELECT NVL(NULL, 'This is the output') null_test FROM dual;

Result:
null_test
This is the output

Both parameters are required for NVL to work. You can also use this function in a WHERE clause so that rows with NULL values are not silently omitted from your query.
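For instance, a WHERE-clause sketch assuming the SCOTT.EMP demo table, where COMM is NULL for most employees:

```
-- Without NVL, rows with a NULL comm would be silently dropped,
-- because comm <= 100 evaluates to NULL (not TRUE) for them.
SELECT ename, comm
FROM   emp
WHERE  NVL(comm, 0) <= 100;
```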

NVL2 Function
The NVL2 function is an extension of NVL in that it returns one of two values: one when the tested value is NOT NULL, and another when it is NULL.

Syntax:
NVL2(variable1, variable1_if_not_null, variable1_if_null)

Example 1:
SELECT NVL2(NULL,'NOT NULL','NULL') nvl2_test FROM dual;

Result:
nvl2_test
NULL

Example 2:
SELECT NVL2('some value','NOT NULL','NULL') nvl2_test FROM dual;

Result:
nvl2_test
NOT NULL

Validating NOT NULL Values
When validating NOT NULL values there is no need for such functions, since the value is already present in the variable. But if the check is whether a value is NULL or not, the NVL2 function is best suited for the purpose.
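The same functions work inside PL/SQL blocks. A small sketch that defaults a possibly-NULL value before using it (the variable names are illustrative only):

```
DECLARE
  p_bonus NUMBER := NULL;   -- imagine this arrived as a parameter
  v_bonus NUMBER;
BEGIN
  v_bonus := NVL(p_bonus, 0);              -- default NULL to zero
  DBMS_OUTPUT.PUT_LINE('Bonus: ' || v_bonus);
END;
/
```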

PLSQL predefined exceptions

An internal exception is raised implicitly whenever your PL/SQL program violates an Oracle rule or exceeds a system-dependent limit. Every Oracle error has a number, but exceptions must be handled by name. So, PL/SQL predefines some common Oracle errors as exceptions. For example, PL/SQL raises the predefined exception NO_DATA_FOUND if a SELECT INTO statement returns no rows.

To handle other Oracle errors, you can use the OTHERS handler. The functions SQLCODE and SQLERRM are especially useful in the OTHERS handler because they return the Oracle error code and message text. Alternatively, you can use the pragma EXCEPTION_INIT to associate exception names with Oracle error codes.

PL/SQL declares predefined exceptions globally in package STANDARD, which defines the PL/SQL environment. So, you need not declare them yourself. You can write handlers for predefined exceptions using the names shown in the list below. Also shown are the corresponding Oracle error codes and SQLCODE return values.

The following are the predefined Oracle PLSQL exceptions:
ACCESS_INTO_NULL
COLLECTION_IS_NULL
CURSOR_ALREADY_OPEN
DUP_VAL_ON_INDEX
INVALID_CURSOR
INVALID_NUMBER
LOGIN_DENIED
NO_DATA_FOUND
NOT_LOGGED_ON
PROGRAM_ERROR
ROWTYPE_MISMATCH
SELF_IS_NULL
STORAGE_ERROR
SUBSCRIPT_BEYOND_COUNT
SUBSCRIPT_OUTSIDE_LIMIT
SYS_INVALID_ROWID
TIMEOUT_ON_RESOURCE
TOO_MANY_ROWS
VALUE_ERROR
ZERO_DIVIDE
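As a sketch of how these are handled in practice (assuming the SCOTT.EMP demo table), here is a handler for a named predefined exception plus an OTHERS fallback:

```
DECLARE
  v_ename emp.ename%TYPE;   -- assumes the SCOTT.EMP demo table
BEGIN
  SELECT ename INTO v_ename FROM emp WHERE empno = -1;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No such employee');
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error ' || SQLCODE || ': ' || SQLERRM);
END;
/
```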


For a brief description of these exceptions please check this link:
http://www.cs.umbc.edu/help/oracle8/server.815/a67842/06_errs.htm#784

Why Function based index

Who can read this: A basic understanding of indexes, updates, and their effects on indexes will help you follow this document. This article is intended as an introduction to function-based indexes.

Who should not read this: This document is meant to be a basic manual for those who would like to learn what function-based indexes are and what they can do. If you are already familiar with the concept, please do not waste your time here. The discussion is also confined to the Oracle database; no general topics are covered in this article.

Now let's start with the topic straightaway:

What is a function-based index, and why use one?

Function-based indexes were introduced in Oracle 8i and were among the most coveted features of that release. Traditionally, applying a function to a column in a query prevented Oracle from using an index on that column. With this feature, Oracle can use an index even when the indexed column appears inside a function (UPPER, for example). This is a huge performance gain considering the enormous amounts of data such databases handle.

So now you have a small idea of what a function-based index can do. In a nutshell, you create an index on an expression that queries frequently apply to a table's columns, and Oracle uses that index when such a query is fired. Isn't that a huge performance bonus?

A function-based index is created in just the same way as a normal index: with the CREATE INDEX statement. The following example is borrowed from Tom Kyte (see references for the URL):

SQL> create table emp as select * from scott.emp;
Table created.

SQL> update emp set ename = initcap(ename);
14 rows updated.

SQL> commit;
Commit complete.

SQL> create index emp_upper_idx on emp(upper(ename));
Index created.

In the code above, we first create a copy of scott.emp and update the ENAME values to initial caps. Once the data is committed, an index on UPPER(ename) is created. The CREATE INDEX clause is no different from the usual one, except that a function appears in the column list. From now on, any case-insensitive query from an application or user can use this index.

SQL> set autotrace on explain
SQL> select ename, empno, sal from emp where upper(ename) = 'KING';

ENAME           EMPNO        SAL
---------- ---------- ----------
King             7839       5000

This is the execution plan of the select query:

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=40)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=40)
   2    1     INDEX (RANGE SCAN) OF 'EMP_UPPER_IDX' (NON-UNIQUE) (Cost=1 Card=1)

It is visible that the INDEX (RANGE SCAN) is used in this case rather than a FULL TABLE SCAN.

This feature becomes more powerful when you consider the fact that user-defined functions could be used as indexes to tables. This will make the application work faster. An example of such a feature is available in the URL provided for Kyte's article.
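As a sketch of that idea, here is a hypothetical user-defined function indexed on the same EMP copy (the function name and logic are made up for illustration; note that the function must be declared DETERMINISTIC to be indexable):

```
CREATE OR REPLACE FUNCTION net_sal (p_sal NUMBER, p_comm NUMBER)
RETURN NUMBER DETERMINISTIC IS   -- DETERMINISTIC is required for indexing
BEGIN
  RETURN p_sal + NVL(p_comm, 0);
END;
/

CREATE INDEX emp_net_sal_idx ON emp (net_sal(sal, comm));

-- Queries filtering on the same expression can now use the index:
SELECT ename FROM emp WHERE net_sal(sal, comm) > 3000;
```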

Privileges required for creating a function based index

1. You must have the query rewrite privilege for your schema

2. You must have the global query rewrite privilege for creating in other schemas

3. If the optimizer has to use a function based index, the following parameters must be set:

    a. QUERY_REWRITE_ENABLED=TRUE

    b. QUERY_REWRITE_INTEGRITY=TRUSTED (both parameters are modifiable through ALTER SYSTEM or ALTER SESSION statements, as well as in the init.ora file)

Where not to use function based index

The same caveats that apply to ordinary indexes hold for function-based indexes: every insert, update, and delete must also maintain the index, so avoid creating one on a heavily modified column that is rarely queried through the function.

References:

Ask Tom: http://asktom.oracle.com/tkyte/article1/

Oracle-Base: http://www.oracle-base.com/articles/8i/FunctionBasedIndexes.php

How to design a database - The untold way

Rule #1
The design of a database must be thought about only after a thorough analysis of the user requirements.

Rule #2
Never implement any functionality of Oracle if it is not required. Always adhere to Rule #1.

Rule #3
Never implement purely as per theory; base your design on practical considerations. Always adhere to Rule #2.

What does this mean?
I will elaborate each and every point mentioned above. Also note that all the rules are related to one another. This ensures that the design is always as per user requirements.

Rule #1
The first rule says: never start or even think about database design before understanding exactly what the user requires. This is the most important rule and must be adhered to, whatever happens.

This rule underlines the importance of requirements analysis. Unless the user's requirements are fully understood, it is nearly impossible to design the database.

Rule #2
This rule reinforces rule #1. The design must use only those Oracle features that are actually required. If a new feature is adopted without further thought, it can hurt application performance and may even render the application useless.

For example, suppose an order-entry screen calls for a master-detail structure per the user requirements. The normal design for such a scenario is two tables linked by a parent key (primary key) and child keys (foreign keys). If we instead design the same structure with, say, nested tables, it changes how the data is stored and makes life very difficult for MIS applications that need only the child tables.

Rule #3
This rule is about the theory we learn when studying database concepts, for example normalization. We have all learned that all tables must be normalized, and normally we adhere to this. But it is a guideline rather than a rule.

In a small application (what counts as small or large is left to the reader) the performance impact might not be significant, but in large applications it can prove fatal. Take normalization itself: if all tables are fully normalized, Oracle stores the data for one transaction across several tables, which means more overhead to fetch the data and to maintain index and data storage.

This is not to say that normalization is bad. But do not normalize tables where it can reasonably be avoided. Do not obsess over which normal form your design is in; think instead about how well it helps the user maintain their data.

Thus database design revolves around these points:
1. Collect the necessary user requirements.
2. Only after collecting the requirements, think about database design.
3. Design the database to be user oriented (data oriented) rather than theory oriented.
4. You need not use every feature Oracle offers.

This ends part 1 of this guide. The next parts will be published soon; watch this space.

Oracle Index and Like clause

Topic: Beginners Level

From time immemorial there has been debate over the LIKE clause and its association (or rather non-association) with indexes. What is the fuss all about? Let's find out.

Like clause allows you to use wildcard character searches over data stored in oracle database. By this means we can do pattern searching over the existing data.

For example:
You have a daily meeting calendar in which the attendees' names are stored as comma-separated values in a VARCHAR2(2000) field. You want to search for the days on which a particular attendee, say Rose, attended the meeting.

table: meeting_schedule
fields:
meeting_date date
meeting_place varchar2(200)
meeting_attendees varchar2(2000)

Searching such a column without wildcard characters such as % will not yield the right results.

The query for such a situation would be:

SELECT meeting_date, meeting_place
FROM meeting_schedule
WHERE meeting_attendees like '%Rose%';

Now the above query will list the meeting_date and meeting_place where Rose has attended. Of course this is a simple usage of LIKE clause.

The LIKE clause can be used in any DML statement, such as SELECT, INSERT, UPDATE and DELETE.

Now, what links the LIKE clause so closely with indexes in Oracle?
Indexes, as you may know, are like the index of a book, which points you to a page; in the same way an Oracle index points to a data location. An index holds a separate copy of the data, but one that is far smaller than the table's. Think of an index as holding the distinct values of a column: the more repeated values a column contains, the faster it is to access the index than the table.

So by this you must have understood where an Index would help. An Index should be used only in such cases, so that it can improve performance. If otherwise used in all places it will degrade the performance.
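One caveat worth knowing when combining LIKE with an index: a pattern with a leading wildcard generally cannot use a normal B-tree index, while a pattern anchored at the start can. A sketch against the meeting_schedule table above (the index name is illustrative):

```
CREATE INDEX meeting_attendees_idx ON meeting_schedule (meeting_attendees);

-- Leading '%': the B-tree index typically cannot be range-scanned,
-- so expect a full table scan.
SELECT meeting_date FROM meeting_schedule
WHERE  meeting_attendees LIKE '%Rose%';

-- Anchored pattern: an index range scan is possible.
SELECT meeting_date FROM meeting_schedule
WHERE  meeting_attendees LIKE 'Rose%';
```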

How can an Index degrade the performance?
An index sits alongside its table, and that means extra work: at every insert, update or delete on the table, the index must be updated too. This extra work can prove costly if the index has a lot of rows.

So, as a simple way to know where to use an index: if each row of a table differs from every other row and there are no repeated values to exploit, do not use an index. Only use an index if at least 25% of the rows in the table are distinct.

Of course there is no hard-and-fast rule about when to use an index and when not to; it depends on the situation.

Step by step guide to create XML based report

I had never tried this kind of reporting before. A mail from a friend prompted me to create such a report and to publish an article showing the step-by-step process.

 

So here is what we are going to do:

 

  1. Create a DTD file
  2. Create an XML file
  3. Create a report based on the two files created.

 

The first step is to create a DTD file.

 

Our XML file contains details of all database environments in use by a team.

 

The following is our DTD file. Save the following contents as c:\database.dtd:

 

<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT DatabaseInventory (DatabaseName+)>
<!ELEMENT DatabaseName (   GlobalDatabaseName
                         , OracleSID
                         , DatabaseDomain
                         , Administrator+
                         , DatabaseAttributes
                         , Comments)
>
<!ELEMENT GlobalDatabaseName (#PCDATA)>
<!ELEMENT OracleSID          (#PCDATA)>
<!ELEMENT DatabaseDomain     (#PCDATA)>
<!ELEMENT Administrator      (#PCDATA)>
<!ELEMENT DatabaseAttributes EMPTY>
<!ELEMENT Comments           (#PCDATA)>

<!ATTLIST Administrator       EmailAlias CDATA #REQUIRED>
<!ATTLIST Administrator       Extension  CDATA #IMPLIED>
<!ATTLIST DatabaseAttributes  Type       (Production|Development|Testing) #REQUIRED>
<!ATTLIST DatabaseAttributes  Version    (7|8|8i|9i|10g) "9i">

<!ENTITY AUTHOR "Anantha Narayanan">
<!ENTITY WEB    "http://askanantha.blogspot.com">

 

The structure is as follows:

  1. DatabaseInventory is a sequence of one or more DatabaseName elements
  2. DatabaseName consists of the fields (GlobalDatabaseName, OracleSID, DatabaseDomain, Administrator+, DatabaseAttributes, Comments)
  3. GlobalDatabaseName is a field of type PCDATA
  4. OracleSID is a field of type PCDATA
  5. DatabaseDomain is a field of type PCDATA
  6. Administrator is a field of type PCDATA
  7. DatabaseAttributes is EMPTY (it carries attributes only)
  8. Comments is a field of type PCDATA

 

In the ATTLIST declarations, we define the attributes of the Administrator and DatabaseAttributes elements.

 

Now the second step is to create the XML file below. Save the following contents as database.xml:

<?xml version="1.0"?>
<!DOCTYPE DatabaseInventory SYSTEM "database.dtd">

<DatabaseInventory>

  <DatabaseName>
    <GlobalDatabaseName>mydb.myspace.com</GlobalDatabaseName>
    <OracleSID>production</OracleSID>
    <DatabaseDomain>myspace.com</DatabaseDomain>
    <Administrator EmailAlias="tlananthu" Extension="2354">Anantha</Administrator>
    <DatabaseAttributes Type="Production" Version="10g"/>
    <Comments>
      This is the production database.
    </Comments>
  </DatabaseName>

  <DatabaseName>
    <GlobalDatabaseName>prod.myspace.com</GlobalDatabaseName>
    <OracleSID>development</OracleSID>
    <DatabaseDomain>myspace.com</DatabaseDomain>
    <Administrator EmailAlias="tlananthu" Extension="2354">Anantha</Administrator>
    <DatabaseAttributes Type="Development" Version="10g"/>
    <Comments>
      This is the development database.
    </Comments>
  </DatabaseName>

  <DatabaseName>
    <GlobalDatabaseName>test.myspace.com</GlobalDatabaseName>
    <OracleSID>testing</OracleSID>
    <DatabaseDomain>myspace.com</DatabaseDomain>
    <Administrator EmailAlias="tlananthu" Extension="2354">Anantha</Administrator>
    <DatabaseAttributes Type="Testing" Version="10g"/>
    <Comments>
      This is the testing database.
    </Comments>
  </DatabaseName>

</DatabaseInventory>

 

Now start Report Builder.

 

  1. Create a New report using the wizard
  2. Click next on Welcome wizard
  3. Select Create both Web and Paper Layout option and click Next
  4. Choose a Tabular report
  5. Choose XML Query
  6. In the Query Source definition choose Query Definition button
  7. In the Define XML Query choose the database.dtd and choose select columns button
    1. Choose all columns
  8. In the Data source field choose the database.xml file and click OK
  9. Click next
  10. Choose all fields and click next
  11. Click next
  12. Edit the labels if necessary.
  13. Click next
  14. Choose a template if necessary
  15. Click Finish.

 

 

Congratulations, you have created a report based on an XML query successfully.

 

Q-quote operator introduced in Oracle 10g

-----------------------------------------------------------------------
-- Document:   Q-quoting mechanism
-- Created On: 17-Dec-07
-- Author:     Amar Kumar Padhi
-- Email:      amar.padhi@gmail.com
-- Purpose:    Q-quote operator introduced in Oracle 10g.
-----------------------------------------------------------------------

 

We use the single quotation mark in SQL and PL/SQL to delimit string literals. If the literal itself contains a single quote, we must add another quote next to it: this extra quote acts as an escape character and removes the conflict with the quotes enclosing the string.

 

Oracle realised that long, complex strings containing many single quotes can become cumbersome and prone to errors that may not be caught during testing.

 

From release 10g onwards, a new quoting mechanism is provided in the form of the "q" operator, which lets us choose our own quotation-mark delimiter.

 

Here are some examples -

SQL> select 'amar's web blog. It's personal..' str from dual;
select 'amar's web blog. It's personal..' str from dual
*
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

 

What we normally do:

SQL> select 'amar''s web blog. It''s personal..' str from dual;

STR
--------------------------------
amar's web blog. It's personal..

1 row selected.

 

Here is the Q-quote operator in use. The statement above can be written in any of the following ways:

 

select q'(amar's web blog. It's personal.)' str from dual;
select q'[amar's web blog. It's personal.]' str from dual;
select q'Aamar's web blog. It's personal.A' str from dual;
select q'/amar's web blog. It's personal./' str from dual;
select q'Zamar's web blog. It's personal.Z' str from dual;
select q'|amar's web blog. It's personal.|' str from dual;
select q'+amar's web blog. It's personal.+' str from dual;

 

(Newly added)

select q'zamar's web blog. It's personal.z' str from dual;

 

And so on. After the q, provide a single quote followed by a character of your choice at the beginning of the string; to close the string, provide the same character followed by a single quote. The quote and the chosen character together form the two-character string enclosure.

 

If you choose brackets or braces, Oracle expects matching enclosures: an opening bracket at the start of the string must be matched by the corresponding closing bracket at the end. Other characters are simply repeated as-is at both ends.

 

All we need to take care of is that the closing two-character delimiter does not itself appear inside the string. For instance, the following throws an error because the closing sequence occurs within the string.

 

SQL> select q'aamar's web blog. Ita's personal.a' str from dual;
ERROR:
ORA-01756: quoted string not properly terminated

 

The same can be used in PL/SQL also.

SQL> declare
  2    l_str varchar2(100) := q'[amar's web blog. Ita's personal.]';
  3  begin
  4    dbms_output.put_line(l_str);
  5  end;
  6  /
amar's web blog. Ita's personal.

PL/SQL procedure successfully completed.

 

I recently came across a program that framed a dynamic INSERT statement to copy data from one database to another. Unfortunately, one of the record columns had a single quote embedded in the string, which caused the program unit to fail in production. This happened in release 9i, and the only alternative then was to remove or double the single quotes in the string literal. From release 10g, Q-quoting can also prevent this problem. I will simulate the same scenario and try it out.

 

SQL> create table am100(col1 number, col2 varchar2(100));
Table created.

SQL> create table am102(col1 number, col2 varchar2(100));
Table created.

SQL> insert into am100 values(1, q'[amar's web blog. It's personal]');
1 row created.

SQL> insert into am100 values(2, q'[this is a simple string]');
1 row created.

SQL> insert into am100 values(3, q'[this is just another string]');
1 row created.

SQL> select * from am100;

      COL1 COL2
---------- ----------------------------------
         1 amar's web blog. It's personal
         2 this is a simple string
         3 this is just another string

3 rows selected.

SQL> commit;
Commit complete.

 

Now I will call a simple routine that will read the data and generate dynamic SQL for inserting into another table.

 

declare
  l_str varchar2(4000);
begin
  for rec in (select col1, col2 from am100) loop
    l_str := 'insert into am102(col1, col2) values (' || rec.col1 || ',''' ||
             rec.col2 || ''')';
    dbms_output.put_line(l_str);
    execute immediate l_str;
  end loop;
end;
/

 

The unit errors out as below, because of the single-quote mismatch:

insert into am102(col1, col2) values (1,'amar's web blog. It's personal')
begin
*
ERROR at line 1:
ORA-00917: missing comma
ORA-06512: at line 4

 

I will now modify the program unit to use a Q-quote. We could also use the REPLACE function to remove or double up the single quotes in the strings; both options are shown below.

 

[] With REPLACE function (doubling the single quote):

declare
  l_str varchar2(4000);
begin
  for rec in (select col1, replace(col2, '''', '''''') col2 from am100) loop
    l_str := 'insert into am102(col1, col2) values ('
             || rec.col1 || ',''' || rec.col2 || ''')';
    dbms_output.put_line(l_str);
    execute immediate l_str;
  end loop;
end;
/

Output generated:

insert into am102(col1, col2) values (1,'amar''s web blog. It''s personal')
insert into am102(col1, col2) values (2,'this is a simple string')
insert into am102(col1, col2) values (3,'this is just another string')

 

[] With Q-quote:

declare
  l_str varchar2(4000);
begin
  for rec in (select col1, 'q''[' || col2 || ']''' col2 from am100) loop
    l_str := 'insert into am102(col1, col2) values ('
             || rec.col1 || ',' || rec.col2 || ')';
    dbms_output.put_line(l_str);
    execute immediate l_str;
  end loop;
end;
/

Output generated:

insert into am102(col1, col2) values (1,q'[amar's web blog. It's personal]')
insert into am102(col1, col2) values (2,q'[this is a simple string]')
insert into am102(col1, col2) values (3,q'[this is just another string]')

 

Q-quote will be a handy option when we deal with huge text literals.