Mohamed Azar Shaj

Oracle DBA, azarmohds@gmail.com, Riyadh Kingdom Of Saudi Arabia

Oracle Certification ** Oracle-Base ** Export and Import ** Character set migration

About Me

Database Administrator who likes to read about hackers and their activities. Completed the Oracle 10g Certified Associate. Active member of the Oracle Forum.

Export and import user level data

5:25 PM / Posted by Azar Mohamed Shaj / comments (0)

Export :

exp user/user@orcl file=d:\backup\test.dmp log=d:\backup\test.log direct=y consistent=y statistics=none


Import :

imp system/manager@test fromuser=user touser=user file=d:\backup\test.dmp log=d:\backup\test_imp.log ignore=y

Disk Drive Full

2:21 PM / Posted by Azar Mohamed Shaj / comments (0)


My disk drive filled up, so I followed the steps below.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create tablespace ts datafile 'c:\ts.dbf' size 1m;

Tablespace created.

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC
---------- ------------------------------ --- --- --- ---
         0 SYSTEM                         YES NO  YES
         1 UNDOTBS1                       YES NO  YES
         2 SYSAUX                         YES NO  YES
         4 USERS                          YES NO  YES
         3 TEMP                           NO  NO  YES
         6 TS                             YES NO  YES

6 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------

C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\SYSTEM01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\SYSAUX01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\USERS01.DBF
C:\TS.DBF

SQL> alter tablespace ts offline;

Tablespace altered.


SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Administrator>move c:\TS.DBF c:\TS1.dbf

C:\Documents and Settings\Administrator>exit

SQL> alter tablespace ts rename datafile 'c:\ts.dbf' to 'c:\ts1.dbf';

Tablespace altered.

SQL> alter tablespace ts online;

Tablespace altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------

C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\SYSTEM01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\SYSAUX01.DBF
C:\ORACLE\PRODUCT\10.2.0\ORADATA\DADA\USERS01.DBF
C:\TS1.DBF

SQL>

ORA-32004: obsolete and/or deprecated parameter(s) specified

1:40 PM / Posted by Azar Mohamed Shaj / comments (0)



Whenever I changed the database from NOARCHIVELOG to ARCHIVELOG mode, I got the above error. After investigating why it occurred, I found and resolved the problem.

The error occurs for the following reason:

sql> archive log start

This command does not need to be run in Oracle 10g, because the LOG_ARCHIVE_START parameter it relies on is obsolete in 10g; an instance started with an obsolete parameter set reports ORA-32004.

Tuning Oracle's Buffer Cache

1:06 PM / Posted by Azar Mohamed Shaj / comments (0)

Roger Schrag, Database Specialists, Inc.
http://www.dbspecialists.com

Introduction

Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are satisfied without going to disk.

If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be wasted.

Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'
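The arithmetic behind this ratio can be sketched in a few lines (a hypothetical illustration; the statistic values below are made up, not taken from a real instance):

```python
def buffer_cache_hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Hit ratio = (logical reads - physical reads) / logical reads."""
    logical_reads = db_block_gets + consistent_gets
    return (logical_reads - physical_reads) / logical_reads

# Example: 40,000 current-mode gets, 160,000 consistent gets, 10,000 disk reads
ratio = buffer_cache_hit_ratio(40_000, 160_000, 10_000)
print(f"{ratio:.2%}")  # 95.00%
```

A ratio like 95% would be in the healthy range the article describes (over 90%).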

You can also see the buffer cache hit ratio for one specific session since that session started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = :sid          -- the SID of the session of interest
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid

You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times X and Y and computing the deltas.

Adjusting The Size Of The Buffer Cache

The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter multiplied by the data block size.

You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file and restarting the instance.
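The sizing arithmetic itself is simple enough to verify by hand (a sketch with made-up parameter values, not from a real init.ora):

```python
def buffer_cache_bytes(db_block_buffers, db_block_size):
    """Buffer cache size in bytes = number of buffers * data block size."""
    return db_block_buffers * db_block_size

# Example: 8,000 buffers of 8 KB (8,192 bytes) each -> 65,536,000 bytes (62.5 MB)
size = buffer_cache_bytes(8_000, 8_192)
print(size, "bytes =", size / 1024 / 1024, "MB")
```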

Determining If The Buffer Cache Should Be Enlarged

If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket. This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer cache were that much bigger.

For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along with a sample result:

SELECT 250 * TRUNC (rownum / 250) + 1 || ' to ' ||
       250 * (TRUNC (rownum / 250) + 1) "Interval",
       SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)

Interval Buffer Cache Hits
--------------- --------------------
1 to 250 16083
251 to 500 11422
501 to 750 683
751 to 1000 177

This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers might lead to under-utilized buffers and therefore wasted memory.

There is overhead involved in collecting extended LRU statistics. Therefore you should set the db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.

In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH. Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.
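The reasoning in the example above can be reproduced offline. This sketch feeds the sample interval counts from the query result into a small helper that picks the largest enlargement whose interval still pays off (the 1,000-hit threshold is an assumption chosen for illustration, not an Oracle recommendation):

```python
# (buffers added, extra cache hits gained) from the sample v$recent_bucket result
intervals = [(250, 16083), (500, 11422), (750, 683), (1000, 177)]

def worthwhile_enlargement(intervals, min_hits_per_interval=1000):
    """Return the largest enlargement (in buffers) whose interval still
    contributes at least min_hits_per_interval additional cache hits."""
    best = 0
    for buffers, hits in intervals:
        if hits >= min_hits_per_interval:
            best = buffers
    return best

print(worthwhile_enlargement(intervals))  # 500: beyond that, gains are marginal
```

With the article's sample data the helper agrees with the text: enlarging by 500 buffers looks worthwhile, while further growth adds little.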

Determining If The Buffer Cache Is Bigger Than Necessary

If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits have been attributable to that particular buffer.

By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache would perform if it were smaller. Following is a query you can use, along with a sample result:

SELECT 1000 * TRUNC (rownum / 1000) + 1 || ' to ' ||
       1000 * (TRUNC (rownum / 1000) + 1) "Interval",
       SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)

Interval Buffer Cache Hits
------------ -----------------
1 to 1000 668415
1001 to 2000 281760
2001 to 3000 166940
3001 to 4000 14770
4001 to 5000 7030
5001 to 6000 959

This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache. This suggests that the buffer cache would be almost as effective if it were half the size; memory is being wasted on an oversized buffer cache.

There is overhead involved in collecting LRU statistics. Therefore you should set the db_block_lru_statistics parameter back to false as soon as your analysis is complete.

In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and count.
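The "first 3000 buffers are responsible for over 98% of the hits" claim can be checked with the sample numbers (a sketch; the figures are copied from the result set above):

```python
# Hits per 1000-buffer interval, from the sample v$current_bucket result
hits_per_1000 = [668415, 281760, 166940, 14770, 7030, 959]

total = sum(hits_per_1000)           # all cache hits recorded
first_3000 = sum(hits_per_1000[:3])  # hits attributable to the first 3000 buffers
share = first_3000 / total
print(f"{share:.1%}")  # 98.0%
```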

Full Table Scans

When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents one large full table scan from wiping out the entire buffer cache.

Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You can get an idea of full table scan activity at the data file level by querying v$filestat and joining to SYS.dba_data_files. Following is a query you can use and sample results:

SELECT A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id

FILE_NAME PHYRDS PHYBLKRD
-------------------------------- ---------- ----------
/u01/oradata/PROD/system01.dbf 92832 130721
/u02/oradata/PROD/temp01.dbf 1136 7825
/u01/oradata/PROD/tools01.dbf 7994 8002
/u01/oradata/PROD/users01.dbf 214 214
/u03/oradata/PROD/rbs01.dbf 20518 20518
/u04/oradata/PROD/data01.dbf 593336 9441037
/u05/oradata/PROD/data02.dbf 4638037 4703454
/u06/oradata/PROD/index01.dbf 1007638 1007638
/u07/oradata/PROD/index02.dbf 1408270 1408270

PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls this batch size.)

In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.
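A quick way to interpret such a listing is the blocks-per-read ratio: values well above 1 suggest multiblock (full table scan) reads. A sketch using two of the sample files above:

```python
# file name -> (PHYRDS, PHYBLKRD), taken from the sample v$filestat output
filestats = {
    "data01.dbf": (593336, 9441037),
    "index01.dbf": (1007638, 1007638),
}

def blocks_per_read(phyrds, phyblkrd):
    """Average number of data blocks returned per read request."""
    return phyblkrd / phyrds

for name, (rds, blks) in filestats.items():
    print(f"{name}: {blocks_per_read(rds, blks):.1f} blocks/read")
```

Here data01.dbf averages nearly 16 blocks per read (heavy full-scan activity), while index01.dbf averages exactly 1 (single-block index reads).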

Spotting I/O Intensive SQL Statements

The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along with various statistics. Following is a query you can use:

SELECT executions, buffer_gets, disk_reads,
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions of the statement. DISK_READS shows the collective number of physical reads issued by all executions of the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is a read that resulted in a physical disk read.)

You can review the results of this query to find SQL statements that perform lots of reads, both logical and physical. Consider how many times a SQL statement has been executed when evaluating the number of reads.
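When scanning such output by eye, reads-per-execution is often more telling than raw totals. A sketch of that weighting (the rows are invented stand-ins for v$sqlarea data, not real statements):

```python
# (sql_text, executions, disk_reads) -- invented sample rows
statements = [
    ("SELECT * FROM orders WHERE ...", 10, 50000),
    ("SELECT * FROM customers WHERE id = :b1", 100000, 120000),
]

def reads_per_execution(executions, disk_reads):
    """Physical reads per run; a frequently-run statement with a large total
    may still be cheap per execution."""
    return disk_reads / executions if executions else float("inf")

worst = max(statements, key=lambda s: reads_per_execution(s[1], s[2]))
print(worst[0])  # the statement doing the most physical I/O per run
```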

Conclusion

This brief document gives you the basic information you need in order to optimize the buffer cache size for your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that experience a lot of full table scans.

Transaction Internals

12:59 PM / Posted by Azar Mohamed Shaj / comments (0)

I hope this PPT document is useful for learning about Oracle transactions:


http://www.juliandyke.com/Presentations/TransactionInternals.ppt

Super fast Database Copying/Cloning

5:54 PM / Posted by Azar Mohamed Shaj / comments (0)




A database cloning procedure is especially useful for the DBA who wants to give his developers a full-sized TEST and DEV instance by cloning the PROD instance into the development server areas.

This Oracle clone procedure can be used to quickly migrate a system from one UNIX server to another. It is often the fastest way to copy an Oracle database.

STEP 1: On the old system, go into SQL*Plus, sign on as SYSDBA and issue: “alter database backup controlfile to trace”. This will put the create database syntax in the trace file directory. The trace keyword tells oracle to generate a script containing a create controlfile command and store it in the trace directory identified in the user_dump_dest parameter of the init.ora file. It will look something like this:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf',
'/u03/oradata/olslsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf',
'/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 2: Shutdown the old database

STEP 3: Copy all data files into the new directories on the new server. You may change the file names if you want, but you must edit the controlfile to reflect the new data files names on the new server.

rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq


STEP 4: Copy and Edit the Control file – Using the output syntax from STEP 1, modify the controlfile creation script by changing the following:

Old:

CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS

New:

CREATE CONTROLFILE SET DATABASE "NEWLSQ" RESETLOGS

STEP 5: Remove the “recover database” and “alter database open” syntax

# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 6: Rename any data files whose names have changed.

Save as db_create_controlfile.sql.

Old:

DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'

New:

DATAFILE
'/u01/oradata/newlsq/system01.dbf',
'/u01/oradata/newlsq/mydatabase.dbf'

STEP 7: Create the bdump, udump and cdump directories

cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile


STEP 8: Copy-over the old init.ora file

rcp $DBA/admin/olslsq/pfile/*.ora newhost:/u01/oracle/admin/newlsq/pfile

STEP 9: Start the new database

@db_create_controlfile.sql

STEP 10: Place the new database in archivelog mode

Super fast Database Copying/Cloning tips by Burleson Consulting

Full Database Export

4:51 PM / Posted by Azar Mohamed Shaj / comments (0)

When and why to perform a full database export.

* Export/import is often the only way to build a similar database on a remote server regardless of operating system compatibility, i.e. cross platform (export on Linux and import on Windows).

* To migrate the database from one operating system to another, i.e. Linux/Unix to Windows, and to upgrade the database from one version to another, i.e. 8i to 9i or 9i to 10g, but not vice versa.

* If you are on Oracle 10g, the above two tasks can also be done using the RMAN CONVERT command, i.e. to build or move a database from one OS to another (cross platform).

* When there is a requirement to clone a database on another test or development server, take a full database mode export dump, create a database with settings similar to the source database, and import the dump into it.

Simple steps: how to perform a full database export using the export utility.

* Use either the system user or any other database user who has the EXP_FULL_DATABASE privilege.
* Set the NLS_LANG environment variable according to the database character set and language details.
SQL> select * from nls_database_parameters
2 where parameter in ('NLS_LANGUAGE','NLS_TERRITORY','NLS_CHARACTERSET');

PARAMETER VALUE
------------------------------ ----------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CHARACTERSET WE8ISO8859P1

Windows (Dos Prompt):
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

Unix/Linux:
$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

* Start the export with following command and options.

exp system/password@mydb file=c:\exportdmp\exp_fulldb_MYDB_27Aug08.dmp
full=y log=c:\exportdmp\exp_fulldb_MYDB_27Aug08.log

Note: This is just a simple command to perform a full database export. Please refer to the Oracle documentation on export/import and their options; check the references.

Help on Export and Import:

Windows:
C:\> exp help=y
C:\> imp help=y

Change Oracle password

1:49 PM / Posted by Azar Mohamed Shaj / comments (1)

Location of Oracle Password hashes

* Database - sys.user$ - password column
* Oracle password file
* Data files of the SYSTEM tablespace
* (Full) export files
* Archive logs

Show Oracle password hash keys

sql> select username,password from dba_users;

sql> select name,password from sys.user$ where password is not null;

How to change an Oracle Password?

You should always use the password command, because the password is sent unencrypted over the network if you use the ALTER USER syntax.

* alter user myuser identified by my!supersecretpassword;

* grant connect to myuser identified by my!supersecretpassword;

* update sys.user$ set password='F894844C34402B67' where name='SCOTT'; (a restart of the database may be necessary)

* SQL*Plus command: password, or password username.

How to change an Oracle password temporarily?

In Oracle it is possible to change a password temporarily. This can be useful for DBAs who need to act as a different user.

SQL> select username,password from dba_users where username='SCOTT';

USERNAME                       PASSWORD
------------------------------ ----------------
SCOTT                          F894844C34402B67

SQL> alter user scott identified by mypassword;

Now log in with the credentials scott/mypassword. After doing your work, you can change the password back by using an undocumented feature called "by values":

SQL> alter user scott identified by values 'F894844C34402B67';

http://www.red-database-security.com/whitepaper/oracle_passwords.html

Oracle Database created as 32bit or 64bit

7:33 PM / Posted by Azar Mohamed Shaj / comments (0)

We can check whether the database was created on a 32-bit or a 64-bit platform through the SQL query below:


SQL> select metadata from sys.kopm$;

METADATA
------------------------------------------------------------
0000006001240F050B0C030C0C0504050D0609070805050505050F05050505050A050505050504050607080823472323081123081141B023008300B207D00300000000000000000000000000000000000000000000000000000000000000000000000000


If the highlighted characters in the output are "B023", the database was created as 32-bit; if they are "B047", it was created as 64-bit.
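Since the highlighting does not survive in plain text, the check can be expressed as a simple marker lookup. A sketch (the substring test follows this post's description; it is a heuristic, not a documented format):

```python
def created_word_size(metadata_hex):
    """Infer 32-bit vs 64-bit from the sys.kopm$ METADATA hex dump,
    per the marker described above."""
    if "B023" in metadata_hex:
        return 32
    if "B047" in metadata_hex:
        return 64
    return None  # marker not found

# Truncated copy of the sample METADATA value shown above
sample = ("0000006001240F050B0C030C0C0504050D0609070805050505050F0505050505"
          "0A050505050504050607080823472323081123081141B023008300B207D003")
print(created_word_size(sample))  # 32
```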

HWM -High Water Mark

7:19 PM / Posted by Azar Mohamed Shaj / comments (0)

Oracle stores data in segments. A segment has free and used space, and the (logical) mark between them is called the high water mark (HWM). As new data is inserted, the mark moves up. When records are deleted, Oracle does not move the HWM back down; that is, the space is unused, but the HWM behaves as if it were still filled with data. When you reorganize a table, for example with the ALTER TABLE ... MOVE command, the HWM is reset.

When you delete from a table (a DML operation), Oracle records undo for the change; that is, you can roll back, at some additional cost. But when you truncate a table (a DDL operation), Oracle moves the HWM back, so this is faster than a normal DELETE. A DELETE also fires any existing triggers, whereas TRUNCATE does not.

TRUNCATE is especially useful instead of dropping and recreating a table: with truncate, no objects become invalid and the grants remain the same.

What Is a Bottleneck?

A bottleneck is the slowest operation in a set of operations. It is important to identify the bottleneck in the system in order to know what needs to be tuned for better performance and scalability.

Some examples:

The I/O system may become a bottleneck on a large database system.

If a database uses more memory than is available, paging and swapping will become a bottleneck.

In a RAC system, the interconnect may become overloaded (unable to handle all data transfers).

Connect sysdba

7:09 PM / Posted by Azar Mohamed Shaj / comments (0)

There are two methods to connect as the sysdba user:

with password

without password

sql> conn / as sysdba ------ without password

* The Oracle database user must have the sysdba privilege.
* The OS user must be added to the dba group.
* The sqlnet.ora file must contain "NTS" in the line below:

sqlnet.authentication_services = (NTS)
****************************************************
with password

sql>conn sys/pwd as sysdba ------- with password

To prevent connecting without a password:
* Remove the OS user from the dba group.
* Edit the sqlnet.ora file and change "nts" to "none" in the line below:
sqlnet.authentication_services = (none)
The default value is nts when the sqlnet.ora file is created.
***************************** DBA ******************

IMPORT A DUMP FILE WITHOUT KNOWING THE EXPORT USER NAME

6:45 PM / Posted by Azar Mohamed Shaj / comments (0)

cmd>imp file= log= full=y show=y indexfile=<> userid=system/password

For example:

cmd>imp system/manager file=d:\test.dmp log=d:\tb\test.log full=y show=y indexfile=d:\tb\test.sql

After executing this command, check the index file and the log file to find the schema names contained in the dump.