
Questions on SQL Part 2

Part 2 – Questions on SQL and SQL*Plus

 

1. A user is setting up a join operation between tables EMP and DEPT. There are some employees in the EMP table that the user wants returned by the query, but the employees are not assigned to departments yet. Which SELECT statement is most appropriate for this user?

 

A.      select e.empid, d.head from emp e, dept d;

B.      select e.empid, d.head from emp e, dept d where e.dept# = d.dept#;

C.      select e.empid, d.head from emp e, dept d where e.dept# = d.dept# (+);

D.     select e.empid, d.head from emp e, dept d where e.dept# (+) = d.dept#;

 

2. Developer ANJU executes the following statement: CREATE TABLE animals AS SELECT * from MASTER.ANIMALS; What is the effect of this statement?

 

A.      A table named ANIMALS will be created in the MASTER schema with the same data as the ANIMALS table owned by ANJU.

B.      A table named ANJU will be created in the ANIMALS schema with the same data as the ANIMALS table owned by MASTER.

C.      A table named ANIMALS will be created in the ANJU schema with the same data as the ANIMALS table owned by MASTER.

D.     A table named MASTER will be created in the ANIMALS schema with the same data as the ANJU table owned by ANIMALS.

 

3. User JANKO would like to insert a row into the EMPLOYEE table. The table has three columns: EMPID, LASTNAME, and SALARY. The user would like to enter data for EMPID 59694, LASTNAME Harris, but no salary. Which statement would work best?

 

A.      INSERT INTO employee VALUES (59694,'HARRIS', NULL);

B.      INSERT INTO employee VALUES (59694,'HARRIS');

C.      INSERT INTO employee (EMPID, LASTNAME, SALARY) VALUES (59694,'HARRIS');

D.     INSERT INTO employee (SELECT 59694 FROM 'HARRIS');

 

4. Which three of the following are valid database datatypes in Oracle? (Choose three.)

 

A.      CHAR

B.      VARCHAR2

C.      BOOLEAN

D.     NUMBER

 

5. Omitting the WHERE clause from a DELETE statement has which of the following effects?

 

A.      The delete statement will fail because there are no records to delete.

B.      The delete statement will prompt the user to enter criteria for the deletion.

C.      The delete statement will fail because of a syntax error.

D.     The delete statement will remove all records from the table.

 

6. Dropping a table has which of the following effects on a nonunique index created for the table?

 

A.      No effect.

B.      The index will be dropped.

C.      The index will be rendered invalid.

D.     The index will contain NULL values.

 

7. To increase the number of nullable columns for a table,

 

A.      Use the alter table statement.

B.      Ensure that all column values are NULL for all rows.

C.      First increase the size of adjacent column datatypes, then add the column.

D.    Add the column, populate the column, then add the NOT NULL constraint.

 

8. Which line of the following statement will produce an error?

 

A.      CREATE TABLE goods

B.      (good_no NUMBER,

C.      good_name VARCHAR2 check(good_name in (SELECT name FROM avail_goods)),

D.     CONSTRAINT pk_goods_01

E.      PRIMARY KEY (goodno));

F.      There are no errors in this statement.

 

9. MAXVALUE is a valid parameter for sequence creation.

 

A.      TRUE

B.      FALSE

 

10. Which function below can best be categorized as similar in function to an IF-THEN-ELSE statement?

 

A.      SQRT

B.      DECODE

C.      NEW_TIME

D.     ROWIDTOCHAR

 

 

 

Answers:

  1. Option C
  2. Option C
  3. Option A
  4. Option A, B and D
  5. Option D
  6. Option B
  7. Option A
  8. Option C
  9. Option A
  10. Option B
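
As a quick illustration of answer 3 (option A), and assuming the three-column EMPLOYEE table described in the question, an explicit NULL keeps the value list aligned with the columns; naming only the columns you supply is the equivalent alternative:

INSERT INTO employee VALUES (59694, 'HARRIS', NULL);             -- explicit NULL for SALARY

INSERT INTO employee (empid, lastname) VALUES (59694, 'HARRIS'); -- SALARY is left NULL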

 

End of Part 2

Questions on SQL Part 1

Part 1 – Questions on SQL and SQL*Plus

 

1. Which of the following statements contains an error?

 

A. SELECT * FROM emp WHERE empid = 493945;

B. SELECT empid FROM emp WHERE empid= 493945;

C. SELECT empid FROM emp;

D. SELECT empid WHERE empid = 56949 AND lastname = 'SMITH';

 

2. Which of the following correctly describes how to specify a column alias?

 

A. Place the alias at the beginning of the statement to describe the table.

B. Place the alias after each column, separated by white space, to describe the column.

C. Place the alias after each column, separated by a comma, to describe the column.

D. Place the alias at the end of the statement to describe the table.

 

3. The NVL function

A. Assists in the distribution of output across multiple columns.

B. Allows the user to specify alternate output for non-null column values.

C. Allows the user to specify alternate output for null column values.

D. Nullifies the value of the column output.

 

4. Output from a table called PLAYS with two columns, PLAY_NAME and AUTHOR, is shown below.

Which of the following SQL statements produced it?

 

PLAY_TABLE
-------------------------------------
"Midsummer Night's Dream", SHAKESPEARE
"Waiting For Godot", BECKETT
"The Glass Menagerie", WILLIAMS

 

A. SELECT play_name || author FROM plays;

B. SELECT play_name, author FROM plays;

C. SELECT play_name||', ' || author FROM plays;

D. SELECT play_name||', ' || author PLAY_TABLE FROM plays;

 

5. Issuing the command DEFINE _EDITOR="emacs" in SQL*Plus will produce which outcome?

 

A. The emacs editor will become the SQL*Plus default text editor.

B. The emacs editor will start running immediately.

C. The emacs editor will no longer be used by SQL*Plus as the default text editor.

D. The emacs editor will be deleted from the system.

 

6. The user issues the following statement. What will be displayed if the EMPID selected is 60494?

 

SELECT DECODE(empid, 38475, 'Terminated', 60494, 'LOA', 'ACTIVE')
FROM emp;

 

A. 60494

B. LOA

C. Terminated

D. ACTIVE

 

7. SELECT (TO_CHAR(NVL(SQRT(59483), 'INVALID')) FROM DUAL is a valid SQL statement.

 

A. TRUE

B. FALSE

 

8. The appropriate table to use when performing arithmetic calculations on values defined within the SELECT statement (not pulled from a table column) is

 

A. EMP

B. The table containing the column values

C. DUAL

D. An Oracle-defined table

 

9. Which of the following is not a group function?

 

A. avg( )

B. sqrt( )

C. sum( )

D. max( )

 

10. The default character for specifying runtime variables in SELECT statements is

 

A. Ampersand

B. Ellipses

C. Quotation marks

D. Asterisk

 

Answers:

  1. Option D
  2. Option B
  3. Option C
  4. Option D
  5. Option A
  6. Option B
  7. Option B
  8. Option C
  9. Option B
  10. Option A
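
To see why answer 6 is LOA, here is a minimal sketch that can be run against DUAL, with the EMPID literal substituted directly:

SELECT DECODE(60494, 38475, 'Terminated', 60494, 'LOA', 'ACTIVE') AS status
  FROM dual;
-- returns LOA, because 60494 matches the second search value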

 

 

Explanations for the answers to these questions are covered in a separate post; don't miss them.

 

End of Part 1

Overview of Logical Database Structures

This section discusses logical storage structures: data blocks, extents, segments, and tablespaces. These logical storage structures enable Oracle Database to have fine-grained control of disk space use.

 

This section includes the following topics:

  • Oracle Database Data Blocks
  • Extents
  • Segments
  • Tablespaces

 

Oracle Database Data Blocks

At the finest level of granularity, Oracle Database data is stored in data blocks. One data block corresponds to a specific number of bytes of physical database space on disk. The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to four other block sizes. A database uses and allocates free database space in Oracle Database data blocks.
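
For reference, a minimal sketch for checking the standard block size of your own database, assuming access to V$PARAMETER (or the SQL*Plus SHOW PARAMETER command):

SHOW PARAMETER db_block_size

-- or, from any SQL interface:
SELECT value AS block_size_bytes
  FROM v$parameter
 WHERE name = 'db_block_size';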

 

Extents

The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks, obtained in a single allocation, used to store a specific type of information.
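
As an illustration, the extents allocated to a segment can be listed from the DBA_EXTENTS dictionary view; the owner and segment name below are placeholders:

SELECT extent_id, file_id, block_id, blocks, bytes
  FROM dba_extents
 WHERE owner = 'SCOTT'
   AND segment_name = 'EMP'
 ORDER BY extent_id;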

 

Segments

Above extents, the level of logical database storage is a segment. A segment is a set of extents allocated for a table, index, rollback segment, or for temporary use by a session, transaction, or SQL parser. In relation to physical database structures, all extents belonging to a segment exist in the same tablespace, but they may be in different data files.

When the extents of a segment are full, Oracle Database dynamically allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk.
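
A small sketch of listing segments and how many extents each has grown to, using the DBA_SEGMENTS dictionary view (the schema name is a placeholder):

SELECT segment_name, segment_type, tablespace_name, extents, blocks
  FROM dba_segments
 WHERE owner = 'SCOTT'
 ORDER BY segment_name;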

 

Tablespaces

A database is divided into logical storage units called tablespaces, which group related data blocks, extents, and segments. For example, tablespaces commonly group together all application objects to simplify some administrative operations.

Each database is logically divided into two or more tablespaces. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The combined size of the datafiles in a tablespace is the total storage capacity of the tablespace.

Every Oracle database contains a SYSTEM tablespace and a SYSAUX tablespace. Oracle Database creates them automatically when the database is created. The system default is to create a smallfile tablespace, which is the traditional type of Oracle tablespace. The SYSTEM and SYSAUX tablespaces are created as smallfile tablespaces.

Oracle Database also lets you create bigfile tablespaces, which are made up of a single large file rather than numerous smaller ones. Bigfile tablespaces let Oracle Database utilize the ability of 64-bit systems to create and manage ultralarge files. As a result, Oracle Database can scale up to 8 exabytes in size. With Oracle-Managed Files, bigfile tablespaces make datafiles completely transparent for users. In other words, you can perform operations on tablespaces, rather than the underlying datafiles.
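
Purely as an illustration, with placeholder names and file paths, creating a smallfile and a bigfile tablespace might look like this:

-- smallfile tablespace: may have many datafiles
CREATE TABLESPACE app_small
  DATAFILE '/u01/oradata/orcl/app_small01.dbf' SIZE 100M;

-- bigfile tablespace: exactly one (potentially very large) datafile
CREATE BIGFILE TABLESPACE app_big
  DATAFILE '/u01/oradata/orcl/app_big01.dbf' SIZE 10G AUTOEXTEND ON;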

 

Overview of Physical Database Structures

The following sections explain the physical database structures of an Oracle database, including datafiles, control files, redo log files, archive log files, parameter files, alert and trace log files, and backup files.

 

This section includes the following topics:

  • Datafiles
  • Control Files
  • Online Redo Log Files
  • Archived Redo Log Files
  • Parameter Files
  • Alert and Trace Log Files
  • Backup Files

 

Datafiles

Every Oracle database has one or more physical datafiles, which contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the datafiles allocated for a database.

 

Datafiles have the following characteristics:

  • A datafile can be associated with only one database.
  • Datafiles can be defined to extend automatically when they are full.
  • One or more datafiles form a logical unit of database storage called a tablespace.

 

Data in a datafile is read, as needed, during normal database operation and stored in the memory cache of Oracle Database. For example, if a user wants to access some data in a table of a database, and if the requested information is not already in the memory cache for the database, then it is read from the appropriate datafiles and stored in memory.

 

Modified or new data is not necessarily written to a datafile immediately. To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate datafiles all at once, as determined by the database writer background process (DBWn).

 

Datafiles that are stored in temporary tablespaces are called tempfiles. Tempfiles are subject to some restrictions.
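
A sketch for checking which datafiles and tempfiles make up a database and whether they autoextend, using the standard DBA_DATA_FILES and DBA_TEMP_FILES views (the file path in the ALTER statement is a placeholder):

SELECT file_name, tablespace_name, bytes, autoextensible
  FROM dba_data_files;

SELECT file_name, tablespace_name, bytes
  FROM dba_temp_files;

-- enabling autoextend on an existing datafile
ALTER DATABASE DATAFILE '/u01/oradata/orcl/app_small01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;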

 

Control Files

Every Oracle database has a control file. A control file contains entries that specify the physical structure of the database, including the following information:

  • Database name
  • Names and locations of datafiles and redo log files
  • Timestamp of database creation

 

Oracle Database can multiplex the control file, that is, simultaneously maintain a number of identical control file copies, to protect against a failure involving the control file.

 

Every time an instance of an Oracle database is started, its control file identifies the datafiles, tempfiles, and redo log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, if a new datafile or redo log file is created), then the control file is automatically modified by Oracle Database to reflect the change. A control file is also used in database recovery.
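
A quick way to see the control file copies an instance is using; multiple rows from V$CONTROLFILE indicate a multiplexed control file:

SELECT name FROM v$controlfile;

-- the same information is available through the initialization parameter
SHOW PARAMETER control_files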

 

Online Redo Log Files

Every Oracle Database has a set of two or more online redo log files. These online redo log files, together with archived copies of redo log files, are collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records), which record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost.

 

To protect against a failure involving the redo log itself, Oracle Database lets you create a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks.
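
The following sketch lists the redo log groups and their member files; more than one member per group means the redo log is multiplexed:

SELECT group#, members, bytes, status
  FROM v$log;

SELECT group#, member
  FROM v$logfile
 ORDER BY group#;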

 

Archived Redo Log Files

When a filled online redo log file is copied to an archive destination, the copy is called an archived redo log file. Oracle recommends that you enable automatic archiving of the redo log; Oracle Database automatically archives redo log files when the database is in ARCHIVELOG mode.
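
A minimal check of the archiving mode of a database (the SQL*Plus ARCHIVE LOG LIST command assumes an administrative connection):

SELECT log_mode FROM v$database;

-- from SQL*Plus, when connected AS SYSDBA:
ARCHIVE LOG LIST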

 

Parameter Files

Parameter files contain a list of configuration parameters for an instance and database. Both parameter files (pfiles) and server parameter files (spfiles) let you store and manage your initialization parameters, but only an spfile keeps them persistently in a server-side disk file. A server parameter file has these additional advantages:

  • The file is concurrently updated when some parameter values are changed in the active instance.
  • The file is centrally located for access by all instances in a Real Application Clusters database.

Oracle recommends that you create a server parameter file as a dynamic means of maintaining initialization parameters.
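
A short sketch of switching to a server parameter file and confirming it is in use; it assumes the default file locations and a SYSDBA connection:

-- create an spfile from the existing text parameter file
CREATE SPFILE FROM PFILE;

-- after a restart, this shows a non-empty value if an spfile is in use
SHOW PARAMETER spfile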

 

Alert and Trace Log Files

Each server and background process can write to an associated trace file. When an internal error is detected by a process, the process dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Support Services. Trace file information is also used to tune applications and instances. The alert file, or alert log, is a special trace file. The alert log of a database is a chronological log of messages and errors.
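
As a hedged example, on Oracle Database 11g and later the diagnostic file locations can be found through V$DIAG_INFO; earlier releases expose them through the dump destination parameters:

SELECT name, value
  FROM v$diag_info
 WHERE name IN ('ADR Home', 'Diag Trace', 'Diag Alert');

-- releases before 11g use these parameters instead
SHOW PARAMETER background_dump_dest
SHOW PARAMETER user_dump_dest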

The following features provide automation and assistance in the collection and interpretation of trace and alert file information:

  • The Automatic Diagnostic Repository (ADR) is a system-managed repository for storing and organizing trace files and other error diagnostic data. ADR provides a comprehensive view of all the critical errors encountered by the database and maintains all relevant data needed for problem diagnosis and eventual resolution. When the same type of incident occurs too frequently, ADR performs flood control to avoid excessive dumping of diagnostic information.
  • The Incident Packaging Service (IPS) extracts diagnostic and test case data associated with critical errors from the ADR and packages the data for transport to Oracle.

 

Backup Files

To restore a file is to replace it with a backup file. Typically, you restore a file when a media failure or user error has damaged or deleted the original file.

User-managed backup and recovery requires you to actually restore backup files before you can perform a trial recovery of the backups.

Server-managed backup and recovery manages the backup process, such as scheduling of backups, as well as the recovery process, such as applying the correct backup file when recovery is needed.

What is Atomicity

The phrase "all or nothing" succinctly describes the first ACID property, atomicity. When an update occurs to a database, either all or none of the update becomes available to anyone beyond the user or application performing the update. This update to the database is called a transaction, and it either commits or aborts. In other words, a mere fragment of the update can never be left in the database, should a problem occur with either the hardware or the software involved. Features to consider for atomicity (a small illustration follows the list below):

 

 

  • A transaction is a unit of operation - either all of the transaction's actions are completed or none are
  • Atomicity is maintained in the presence of deadlocks
  • Atomicity is maintained in the presence of database software failures
  • Atomicity is maintained in the presence of application software failures
  • Atomicity is maintained in the presence of CPU failures
  • Atomicity is maintained in the presence of disk failures
  • Atomicity can be turned off at the system level
  • Atomicity can be turned off at the session level
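
As a small illustration of this all-or-nothing behaviour, the sketch below uses a hypothetical ACCOUNTS table: either both updates become visible at COMMIT, or neither survives a ROLLBACK.

-- hypothetical transfer between two accounts
UPDATE accounts SET balance = balance - 100 WHERE acct_no = 101;
UPDATE accounts SET balance = balance + 100 WHERE acct_no = 102;

-- if anything went wrong, undo both changes:
ROLLBACK;

-- otherwise make both changes permanent together:
COMMIT;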

 

 

Oracle architecture v DB2 architecture

Author: David (DBA)

This article makes a direct comparison between the Oracle architecture (instance, database, physical files, network, configuration files, etc.) and the DB2 architecture, thus allowing Oracle DBAs to understand the similarities and differences between Oracle and DB2.

Because of its comparative nature (showing you how the same things work in DB2 compared to Oracle), I think it's a great first read if you are an Oracle DBA new to DB2.

I do have some comments I would like to add:

1) IBM says:

DB2 does contain a binary file known as the system database directory that contains entries of all the databases you can connect from your DB2 machine.

This compares best to the Oracle tnsnames.ora file. I don't understand why IBM didn't draw this comparison.

2) IBM says:

Every Oracle database contains a table space named SYSTEM, which Oracle creates automatically when the database is created. Other table spaces for user, temporary and index data need to be created after the database has been created, and a user needs to be assigned to these table spaces before they can be used.

While I get IBM's point (that you have to create tablespaces manually after the CREATE DATABASE command), this is not technically true. First, when you issue a CREATE DATABASE command in Oracle you explicitly specify the creation of a temp tablespace. Second, after the database creation, all other tablespaces you have created can be used without being "assigned to users", simply by using them in the storage clause of the object creation statement, as the sketch below shows.
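
A small sketch of that second point, using placeholder names and paths: a newly created tablespace can be referenced directly in the storage clause of CREATE TABLE, with no separate assignment step (the owning user does still need quota on the tablespace):

CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M;

CREATE TABLE invoices (
  invoice_id NUMBER,
  amount     NUMBER
) TABLESPACE app_data;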

3) IBM Says:

DB2 table spaces can be classified as SMS (system-managed spaces) or DMS (database-managed spaces). SMS table spaces are managed by the operating system and can only be directories. They grow automatically as needed; thus SMS provides good performance with minimum administration.

Oracle does not have the SMS concept for its storage model but its data files are similar to DB2 DMS table spaces.

This is not entirely true. There is something similar (but not completely so) in Oracle, known as OMF (9i) / ASSM (10g). I will not go into detail about what these features do, but I will say that while OMF / ASSM files are managed by Oracle and not the operating system, they do simplify the administration of tablespaces to an almost automatic level.

4) IBM Says:

As indicated earlier, Oracle's data buffer concept is equivalent to DB2's bufferpool; however, DB2 allows for multiple bufferpools to exist. There is no predefined number of bufferpools that you can create, and they can have any name.

Again, not entirely true. While less flexible, Oracle 9i/10g allows for multiple database buffer caches with different block sizes. The limitation is one buffer cache per block size, while in DB2 it is one bufferpool per tablespace.
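
A sketch of the Oracle side of this, with placeholder sizes and paths: a non-standard block size needs a matching buffer cache configured before a tablespace with that block size can be created:

-- allocate a cache for 16K blocks (assumes the standard block size is not 16K)
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- now a tablespace using 16K blocks can be created
CREATE TABLESPACE hist_data
  DATAFILE '/u01/oradata/orcl/hist_data01.dbf' SIZE 500M
  BLOCKSIZE 16K;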

Data warehouse v Database

Author: Nishith


Data warehouse vs. database is a long and old debate. In this article I am going to discuss the difference between a data warehouse and a database. I will also discuss which one is better to use.

Similarities in Database and Data warehouse:

  • Both database and data warehouse are databases.
  • Both database and data warehouse have some tables containing data.
  • Both database and data warehouse have indexes, keys, views etc.

Difference between Database and Data warehouse:

So is that 'Data warehouse' really different from the tables in your application? And if the two aren't really different, maybe you can just run your queries and reports directly from your application databases!

Well, to be fair, that may be just what you are doing right now, running some end-of-day reports as complex SQL queries and shipping them off to those who need them. And this scheme might just be serving you fine right now. Nothing wrong with that if it works for you.

But before you start patting yourself on the back for having avoided a data warehouse altogether, do spend a moment to understand the differences, and to appreciate the pros and cons of either approach.

  • The application database is not your data warehouse, for the simple reason that your application database is not designed to answer analytical queries.

  • The database is designed and optimized to record while the data warehouse is designed and optimized to respond to analysis questions that are critical for your business.

  • Application databases are On-Line Transaction Processing (OLTP) systems where every transaction has to be recorded, and super-fast at that.

  • A Data Warehouse on the other hand is a database that is designed for facilitating querying and analysis.

  • A data warehouse is designed as an On-Line Analytical Processing (OLAP) system. A data warehouse contains read-only data that can be queried and analyzed far more efficiently than regular OLTP application databases. In this sense an OLAP system is designed to be read-optimized.

  • Separation from your application database also ensures that your business intelligence solution is scalable, better documented and managed and can answer questions far more efficiently and frequently.

Data warehouse is better than a database:

The Data Warehouse is the foundation of any analytics initiative. You take data from various data sources in the organization, clean and pre-process it to fit business needs, and then load it into the data warehouse for everyone to use. This process is called ETL, which stands for 'Extract, transform, and load'.

Suppose you are running your reports off the main application database. Now the question is would the solution still work next year with 20% more customers, 50% more business, 70% more users, and 300% more reports? What about the year after next? If you are sure that your solution will run without any changes, great!! However, if you have already budgeted to buy new state-of-the-art hardware and 25 new Oracle licenses with those partition-options and the 33 other cool-sounding features, you might consider calling up Oracle and letting them know. There's a good chance they'd make you their brand ambassador.

Creation of a data warehouse leads to a direct increase in quality of analyses as you can keep only the needed information in simpler tables, standardized, and denormalized to reduce the linkages between tables and the corresponding complexity of queries.

A data warehouse drastically reduces the cost-per-analysis and thus permits more analysis per FTE.

It's probably more sensible and simpler to create a new data warehouse exclusively for your needs. And if you are cash strapped, you could easily do that at extremely low costs by using excellent open source databases like MySQL.

Difference between unique index and unique constraints-Oracle

A constraint is defined by Oracle as a declarative way to define a business rule for a column of a table. An integrity constraint is a statement about a table's data that is always true.

The difference between a unique index and a unique key/primary key constraint starts with the fact that the constraint is a rule, while the index is a database object used to provide improved performance in the retrieval of rows from a table. The index is a physical object that takes space and is created either with the CREATE INDEX DDL command, or implicitly as part of a CREATE TABLE statement with primary key or unique key constraints, or by an ALTER TABLE command that adds these constraints (see the SQL manual).

Briefly the constraints are:
Not Null - Column value must be present.
Unique Key - Column(s) value(s) must be unique in table or null (see note below).
Primary Key - Unique key + Not Null which equates to every column in the key must have a value and this value is unique so the primary key uniquely identifies each and every row in the table.
Foreign Key - Restricts values in a table column to being a value found in the primary key or unique key Constraint on the referenced table (parent/child relationship).
Check - Tests the column value against an expression (rule).

Technically it would be possible for Oracle to support primary key and unique key constraints without using an index at all. In the case of a unique key or primary key constraint, Oracle could perform a full table scan to check for the presence of a key value before performing the insert, but the performance cost of doing this for anything other than a very small table would be excessive, probably rendering Oracle useless.

Prior to Oracle 8 if you defined a primary key or a unique key constraint the Oracle RDBMS would create a unique index to support enforcement of the constraint. If an index already existed on the constrained columns Oracle would use it rather than define another index on the same columns.

Starting with Oracle version 8 Oracle has the ability to enforce primary key and unique key constraints using non-unique indexes. The use of non-unique indexes supports deferring enforcement of the constraint until transaction commit time if the constraint is defined at creation time as deferrable. Also starting with version 8 Oracle has the ability to place constraints on tables where the existing data does not meet the requirements imposed by the constraint through use of a novalidate option (see SQL Manual).
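
As an illustration of both points, with hypothetical table and index names, a deferrable primary key constraint can be enforced through a pre-existing non-unique index:

CREATE TABLE orders (
  order_id    NUMBER,
  customer_id NUMBER
);

-- a non-unique index on the key column
CREATE INDEX orders_id_ix ON orders (order_id);

-- the deferrable constraint reuses the non-unique index
ALTER TABLE orders ADD CONSTRAINT orders_pk
  PRIMARY KEY (order_id)
  DEFERRABLE INITIALLY IMMEDIATE
  USING INDEX orders_id_ix;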

The practical difference between using a plain unique index to support data integrity and declaring a unique key or primary key constraint on the same columns (Oracle will build an index to support the constraint if you do not) is that foreign key constraints can only be defined when the primary key or unique key constraint exists. Also, in the case of a primary key constraint, Oracle will add a NOT NULL constraint to the constrained columns when the constraint is added, to meet the primary key requirement of uniquely identifying each and every row in the table.

There is no such restriction on a unique index. The primary key and unique key constraints along with foreign key constraints that reference them also provide a form of documentation on the relationships between objects.

The Oracle RDBMS Data Dictionary views All/DBA/USER_CONSTRAINTS and ALL/DBA/USER_CONS_COLUMNS may be used to locate constraints on a table and the columns being constrained.
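
Continuing with the hypothetical ORDERS table from the previous sketch, the USER_* versions of those views can be queried like this:

SELECT constraint_name, constraint_type, status, deferrable
  FROM user_constraints
 WHERE table_name = 'ORDERS';

SELECT constraint_name, column_name, position
  FROM user_cons_columns
 WHERE table_name = 'ORDERS';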

If you drop or disable a primary key or unique key constraint that is supported by a unique index the index is dropped with the constraint. If a non-unique index is used to support the constraint the index is not dropped with the constraint. This second condition is effective only with version 8 and higher.

Note – Unique key constraints allow the constrained column to be NULL. Null values are considered to be valid and do not violate the constraint.

Distinguish SYS and SYS as SYSDBA

Have you ever wondered, after connecting to the database as SYS, whether you are connected as the regular user SYS or as SYS AS SYSDBA? Well, the information is available in the view V$SESSION_CONNECT_INFO. The following are the fields available in this view:

 

Name                        Type
--------------------------  --------------
SID                         NUMBER
AUTHENTICATION_TYPE         VARCHAR2(26)
OSUSER                      VARCHAR2(30)
NETWORK_SERVICE_BANNER      VARCHAR2(4000)

 

 

You can execute the following query to know the connection type:

SQL> SELECT sid, authentication_type, osuser FROM v$session_connect_info WHERE sid IN

      (SELECT sid FROM v$session WHERE username=USER);

SID    AUTHENTI      OSUSER
----   --------      --------------
4      OS            Administrator
5      DATABASE      Administrator


If the authentication type is DATABASE, you logged in as the regular SYS user, but if the type is OS, it means you logged in as SYS AS SYSDBA.

 

The following five operations on Oracle require the user to have SYSDBA privileges in order to perform the operation:

  • start up a database
  • shut down a database
  • back up a database
  • recover a database
  • create a database

The V$PWFILE_USERS view lists all users who have been granted the SYSDBA or SYSOPER privilege. The SYSDBA privilege cannot be granted to PUBLIC.
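
A quick look at who holds these privileges, using the view just mentioned:

SELECT username, sysdba, sysoper
  FROM v$pwfile_users;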

 

Note, SYSDBA is not a role, it is a privilege. You'll find it in system_privilege_map, not in dba_roles.

 

Any time someone connects AS SYSDBA, the session actually runs as SYS. That is, if SYSDBA is granted to JOHN and John connects AS SYSDBA, SELECT USER FROM DUAL reveals that he is actually SYS.
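
A short SQL*Plus illustration of this behaviour (JOHN and his password are hypothetical, and SYSDBA must already have been granted to him):

SQL> CONNECT john/john_pwd AS SYSDBA
Connected.
SQL> SELECT user FROM dual;

USER
------------------------------
SYS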

 

SYS is also special in that it is not possible to create a trigger in the sys schema. Also, a logon trigger is not executed when sys connects to the database.

 

 

Program Global Area (PGA)

A program global area (PGA) is a memory region that stores the data and control information for a server process. Each server process has a non-shared memory region created by Oracle when the server process is started. Access to the PGA is exclusive to that server process, and it is read and written only by Oracle code acting on its behalf. Broadly speaking, the PGA contains a private SQL area and a session memory area.

A private SQL area contains data such as bind information and runtime memory structures. Each session that issues a SQL statement has a private SQL area.

Note that the location of a private SQL area depends on the type of connection established for a session. If a session is connected through a dedicated server, private SQL areas are located in the server process’s PGA. However, if a session is connected through a shared server, part of the private SQL area is kept in the SGA.

Session memory is the memory allocated to hold a session’s variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.
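
A hedged sketch for looking at overall PGA usage on an instance; V$PGASTAT is populated when automatic PGA memory management (PGA_AGGREGATE_TARGET) is in use:

SELECT name, value
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');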

---

The above text is an excerpt from:

Oracle 10g Grid & Real Application Clusters
Oracle 10g Grid Computing with RAC
ISBN 0-9744355-4-6

by Mike Ault, Madhu Tumma

Oracle HTTP Server

The Oracle HTTP Server (OHS) provides key infrastructure for serving the Internet's HTTP protocol. OHS is used to return responses for both process-to-process and human-generated requests from browsers. Key aspects of OHS are its technology, its serving of both static and dynamic content, and its integration with both Oracle and non-Oracle products.

Technology - OHS is based on the proven, open source technology of both Apache 1.3 and Apache 2.0. OHS versions based on Apache 2.0 now provide the ability to accommodate the newest version of the Internet Protocol, IPv6. OHS based on Apache 2.0 is available as a standalone product off the Oracle Application Server 10g Companion CD. In addition OHS now provides, via the open source product mod_security, an application firewall capability.

Static and Dynamic Content - OHS serves static content directly or via standard interfaces such as the WebDAV standard. Great flexibility is provided in dynamic content generation, and many languages, such as Java, C/C++, Perl, PHP and PL/SQL, are supported for content generation.

Integration - While OHS has standalone deployment options, it can also be deployed in a highly integrated manner with Oracle clustering, monitoring, Single Sign On or Web Caching technology. In addition, Oracle offers plug-ins (Proxy, OC4J, and OSSO) for integration of the Oracle Application Server with non-Oracle HTTP Servers. These plug-ins are available off the Oracle Application Server 10g Companion CD.

Retrieved from: http://www.oracle.com/technology/products/ias/ohs/index.html