Friday, July 1, 2011

Innotiive Asia Is Now a Database Company

Innotiive Asia Sdn Bhd is now a database company that specializes in Oracle, MSSQL, MySQL and DB2.

If you have any queries or database work to be done, please contact us at info@innotiiveasia.com

Thanks

Thursday, May 20, 2010

Looking for Oracle DBAs

Hi Guys,

Long time no see. Innotiive Asia is expanding and is looking for experienced Oracle DBAs.

Remuneration is commensurate with experience.

Please send your resumes to hr@innotiiveasia.com

Or call me at 012-2001973 to arrange an interview.

Thanks

Friday, October 2, 2009

Oracle Blog from Innotiive Asia (M) Sdn Bhd

A new Oracle blog, maintained by Innotiive Asia's Oracle consultants, will be launched in the 2nd week of October 2009.

There will be a lot of info for new and experienced DBAs. The blog is also used by Innotiive's DBAs to share their experience. You can share your own experiences and ask questions, and a group of dedicated DBAs will be there to answer them.

Please give your suggestions on how to improve the blog so that DBAs, especially Malaysian DBAs, can benefit from it.


http://oracleinnotiive.wordpress.com

 

Monday, August 10, 2009

Why is excessive redo generated during an Online/Hot Backup

This is a question someone asked, and here is the answer. There is no better person to answer it than Tom Kyte... :-)

There is not excessive redo generated; there is additional information logged into the online redo log during a hot backup the first time a block is modified in a tablespace that is in hot backup mode.

In hot backup mode only 2 things are different:

o the first time a block is changed in a datafile that is in hot backup mode, the ENTIRE BLOCK is written to the redo log files, not just the changed bytes. Normally only the changed bytes (a redo vector) are written. In hot backup mode, the entire block is logged the FIRST TIME. This is because you can get into a situation where the process copying the datafile and DBWR are working on the same block simultaneously. Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR has asked to rewrite this block. The OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now gotten an impossible block: the head and tail are from two points in time. We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery, this block is totally rewritten from redo and is at least consistent with itself. We can recover it from there.

o the datafile headers which contain the SCN of the last completed checkpoint are NOT updated while a file is in hot backup mode. This lets the recovery process understand what archived redo log files might be needed to fully recover this file.

To limit the effect of this additional logging, you should ensure you only place one tablespace at a time in backup mode and bring the tablespace out of backup mode as soon as you have backed it up. This will reduce the number of blocks that may have to be logged to the minimum possible.
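Following that advice, a user-managed hot backup places each tablespace in backup mode only for as long as it takes to copy its files. A minimal sketch (the USERS tablespace and the file paths are just examples):

```sql
-- Put only USERS in backup mode; other tablespaces keep normal redo logging
ALTER TABLESPACE users BEGIN BACKUP;

-- Copy the tablespace's datafiles with an OS utility while in backup mode,
-- e.g. from SQL*Plus:  host cp /u01/oradata/ORCL/users01.dbf /backup/

-- End backup mode immediately so full-block logging stops
ALTER TABLESPACE users END BACKUP;

-- Verify nothing is left in backup mode
SELECT t.name
FROM   v$backup b JOIN v$datafile d ON b.file# = d.file#
       JOIN v$tablespace t ON d.ts# = t.ts#
WHERE  b.status = 'ACTIVE';
```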

Saturday, February 28, 2009

Will be blogging again!!!

Please accept my apologies for not blogging for 8 months...... Been busy being a new father....

Will be starting to blog again very soon. Please check this blog for new and informative articles on the Oracle Database.

Thank you and see you soon.....

Thursday, July 3, 2008

INITIALIZATION PARAMETERS

Server Parameters for Oracle 9i and 10g
Green = New in 10g
Red = Valid in 9i but dropped (or hidden) in 10g
Bold = Static Parameter - change in Pfile/SPfile
Normal= Dynamic Parameter - change in Pfile/SPfile/ALTER SYSTEM /SESSION
PARAMETER DESCRIPTION
------------------------------ ----------------------------------------
ACTIVE_INSTANCE_COUNT = int Active instances in the cluster
AQ_TM_PROCESSES = int Number of AQ Time Managers to start
ARCHIVE_LAG_TARGET = int Max no. seconds of redos the standby could lose
asm_diskgroups = string Disk groups to mount automatically
asm_diskstring = string Disk set locations for discovery
asm_power_limit = int Number of processes for disk rebalancing
AUDIT_FILE_DEST = 'directory' Directory in which auditing files are to reside
AUDIT_SYS_OPERATIONS = {TRUE FALSE}
AUDIT_TRAIL = {NONE FALSE DB TRUE OS} Enable system auditing 9i
AUDIT_TRAIL = {NONE DB DB_EXTENDED OS} Enable system auditing 10g
BACKGROUND_CORE_DUMP = {PARTIAL FULL}
BACKGROUND_DUMP_DEST = 'path or directory'
BACKUP_TAPE_IO_SLAVES = {TRUE FALSE} DEFERRED
BITMAP_MERGE_AREA_SIZE = int Memory for BITMAP MERGE
BLANK_TRIMMING = {TRUE FALSE}
CIRCUITS = int
CLUSTER_DATABASE = {TRUE FALSE} If TRUE startup in cluster database mode
CLUSTER_DATABASE_INSTANCES = int
CLUSTER_INTERCONNECTS = ipaddr [:ipaddr...] Interconnects for RAC use
COMMIT_POINT_STRENGTH = int
COMPATIBLE = release_number [CHAR: 9.2.0.0.0]
CONSTRAINT[S] = { IMMEDIATE DEFERRED DEFAULT }
CONTROL_FILE_RECORD_KEEP_TIME = int Time in Days
CONTROL_FILES =filename [,filename [...] ]
CORE_DUMP_DEST = 'text'
CPU_COUNT = int
CREATE_BITMAP_AREA_SIZE = int
CREATE_STORED_OUTLINES = {TRUE FALSE 'category_name' } [NOOVERRIDE]
CURSOR_SHARING = {SIMILAR EXACT FORCE}
CURSOR_SPACE_FOR_TIME = {TRUE FALSE}
CURRENT_SCHEMA = schema Change the current schema of the session
DB_2k_cache_size = int bytes Size of cache for 2K buffers
DB_4k_cache_size = int bytes Size of cache for 4K buffers
DB_8k_cache_size = int bytes Size of cache for 8K buffers
DB_16k_cache_size = int bytes Size of cache for 16K buffers
DB_32k_cache_size = int bytes Size of cache for 32K buffers
DB_BLOCK_BUFFERS = int Deprecated in favour of DB_CACHE_SIZE
DB_BLOCK_CHECKING = {TRUE FALSE} DEFERRED
DB_BLOCK_CHECKSUM = {TRUE FALSE}
DB_BLOCK_SIZE = int [bytes] Do not alter after db creation
DB_CACHE_ADVICE = {ON READY OFF}
DB_CACHE_SIZE = int [bytes]
DB_CREATE_FILE_DEST = directory
DB_CREATE_ONLINE_LOG_DEST_n = directory (where n = 1-5)
Default locn for Oracle-managed control files and online redo logs.
DB_DOMAIN = domain_name Directory part of global database name
DB_FILE_MULTIBLOCK_READ_COUNT = int
DB_FILE_NAME_CONVERT = [(]'dbfile1' , 'dbfile2'...[)]
Datafile name convert patterns and strings
for standby/clone db [old string, new string]
DB_FILES = int
db_flashback_retention_target = int Max Flashback Database log retention (minutes)
DB_KEEP_CACHE_SIZE = int [bytes]
DB_NAME = database_name
db_recovery_file_dest = string Default database recovery file location
db_recovery_file_dest_size = int Database recovery files size limit
DB_RECYCLE_CACHE_SIZE = int [bytes]
db_unique_name = string Database Unique Name
DB_WRITER_PROCESSES = int Number of background database writer
processes to start
DBLINK_ENCRYPT_LOGIN = {TRUE FALSE} Enforce password encryption for distributed login
DBWR_IO_SLAVES = int
DDL_WAIT_FOR_LOCKS = {TRUE FALSE} Disable NOWAIT DML lock acquisitions
DG_BROKER_CONFIG_FILEn = filename (where n = 1 or 2)
DG_BROKER_START = {TRUE FALSE}
DISK_ASYNCH_IO = {TRUE FALSE}
DISPATCHERS = 'dispatch_clause' (see SQL ref manual for detail)(MTS_Dispatchers in Ora 8/9)
DISTRIBUTED_LOCK_TIMEOUT = int
DML_LOCKS = int One for each table modified in a transaction
DRS_START = {TRUE FALSE} Start DG Broker monitor (DMON process)
ERROR_ON_OVERLAP_TIME = {TRUE FALSE}
ENQUEUE_RESOURCES = int Resources for enqueues
EVENT = debug_string Debug event control
FAL_CLIENT = string Fetch archive log Client
FAL_SERVER = string Fetch archive log Server
FAST_START_IO_TARGET = int Upper bound on recovery reads(Deprecated)
FAST_START_MTTR_TARGET = int
FAST_START_PARALLEL_ROLLBACK = {FALSE LOW HIGH}
Max number of parallel recovery slaves
FILE_MAPPING = {TRUE FALSE}
FILEIO_NETWORK_ADAPTERS = char Network Adapters for File I/O
FILESYSTEMIO_OPTIONS = {none setall directIO asynch}
FIXED_DATE = {'YYYY_MM_DD_HH24_MI-SS' 'date in default format'}
Fix SYSDATE value for debugging
GC_FILES_TO_LOCKS = '{file_list=lock_count[!blocks][EACH][:...]}'
RAC/OPS - lock granularity number of
global cache locks per file (DFS)
GCS_SERVER_PROCESSES = int Number of background gcs server processes to start
GLOBAL_CONTEXT_POOL_SIZE = {1 MB int MB}
GLOBAL_NAMES = {TRUE FALSE} Enforce that database links have same
name as remote database
HASH_AREA_SIZE = int Size of in-memory hash work area (Shared Server)
HASH_JOIN_ENABLED = {TRUE FALSE}
HI_SHARED_MEMORY_ADDRESS = int SGA starting address (high order 32-bits
on 64-bit platforms)
HS_AUTOREGISTER = {TRUE FALSE} Enable automatic server DD updates in HS
agent self-registration
IFILE = parameter_file_name Include file in init.ora
INSTANCE = int Connect to a different RAC instance
INSTANCE_GROUPS = group_name [,group_name ... ]
INSTANCE_NAME = instance_id
INSTANCE_NUMBER = int
INSTANCE_TYPE = {RDBMS ASM} Type of instance to be executed
RDBMS or Automated Storage Management
ISOLATION_LEVEL = {SERIALIZABLE READ COMMITTED}
JAVA_MAX_SESSIONSPACE_SIZE = int [bytes]
JAVA_POOL_SIZE = int [bytes]
JAVA_SOFT_SESSIONSPACE_LIMIT = int
JOB_QUEUE_PROCESSES = int
LARGE_POOL_SIZE = int [bytes]
LICENSE_MAX_SESSIONS = int Maximum number of non-system user sessions
(concurrent licensing)
LICENSE_MAX_USERS = int Maximum number of named users that can be created
(named user licensing)
LICENSE_SESSIONS_WARNING = int Warning level for number of non-system
user sessions
LOCAL_LISTENER = network_name Define which listeners instances register with
LOCK_NAME_SPACE = namespace Used for generating lock names for standby/primary database
assign each a unique name space
LOCK_SGA = {TRUE FALSE}
LOG_ARCHIVE_CONFIG = [SEND NOSEND] [RECEIVE NORECEIVE] [DG_CONFIG]
LOG_ARCHIVE_DEST = string
LOG_ARCHIVE_DEST_n = {null_string
{LOCATION=local_pathname SERVICE=tnsnames_service}
[MANDATORY OPTIONAL] [REOPEN[=integer]]}
LOG_ARCHIVE_DEST_STATE_n = {ENABLE DEFER} (n = 1-10)
LOG_ARCHIVE_DUPLEX_DEST = string
LOG_ARCHIVE_FORMAT = string [CHAR: "MyApp%S.ARC"]
LOG_ARCHIVE_LOCAL_FIRST = {TRUE FALSE} Establish EXPEDITE attribute default value
LOG_ARCHIVE_MAX_PROCESSES = int
LOG_ARCHIVE_MIN_SUCCEED_DEST = int Minimum number of archive destinations
that must succeed
LOG_ARCHIVE_START = {TRUE FALSE}
LOG_ARCHIVE_TRACE = int Tracing level for Archive logs
LOG_BUFFER = int bytes Redo circular buffer size
LOG_CHECKPOINT_INTERVAL = int Checkpoint threshold, # redo blocks
LOG_CHECKPOINT_TIMEOUT = int Checkpoint threshold, maximum time interval between
checkpoints in seconds
LOG_CHECKPOINTS_TO_ALERT = {TRUE FALSE} Log checkpoint begin/end to alert file
LOG_FILE_NAME_CONVERT = ['old string','new string']
Convert patterns/strings for standby/clone db
LOG_PARALLELISM = int Number of log buffer strands
LOGMNR_MAX_PERSISTENT_SESSIONS = int Maximum no of threads to mine
MAX_DISPATCHERS = int Max number of dispatchers
MAX_DUMP_FILE_SIZE = {size bytes UNLIMITED} [DEFERRED]
MAX_ENABLED_ROLES = int Max number of roles a user can have enabled
MAX_ROLLBACK_SEGMENTS = int Max number of rollback segments in SGA cache
MAX_SHARED_SERVERS = int Max number of shared servers
mts_circuits = int Max number of circuits (10g see CIRCUITS)
mts_dispatchers Specifications of dispatchers (10g see DISPATCHERS)
MTS_LISTENER_ADDRESS Address(es) of network listener [CHAR]
mts_max_dispatchers Max number of dispatchers (10g see MAX_DISPATCHERS)
mts_max_servers Max number of shared servers (10g see MAX_SHARED_SERVERS)
MTS_MULTIPLE_LISTENERS = {TRUE FALSE} Are multiple listeners enabled?
MTS_SERVERS = int Number of shared servers to start up [NUMBER]
mts_service = string Service supported by dispatchers [CHAR]
mts_sessions = int max number of shared server sessions [NUMBER]

nls_calendar ='string' NLS calendar system name (Default=GREGORIAN)
nls_comp = {BINARY ANSI} NLS comparison, Enterprise Edition
nls_currency ='string' NLS local currency symbol
nls_date_format ='format' NLS Oracle date format
nls_date_language =language NLS date language name (Default=AMERICAN)
nls_dual_currency = currency_symbol
nls_iso_currency = territory Override the default set by NLS_TERRITORY
nls_language = language NLS language name (session default)
nls_length_semantics = {BYTE CHAR} Default when creating new columns
nls_nchar_conv_excp = {TRUE FALSE} Raise an exception instead of
allowing an implicit conversion
nls_numeric_characters ="decimal_character group_separator"
nls_sort = {BINARY linguistic_def} Case-sensitive or insensitive sort
linguistic_def may be BINARY, BINARY_CI, BINARY_AI,
GERMAN, GERMAN_CI, etc
nls_territory = territory Territory name (country settings)
nls_time_format =time_format Time format
nls_time_tz_format = time_format Time with timezone format
nls_timestamp_format = time_format Timestamp format
nls_timestamp_tz_format = time_format Timestamp with timezone format

O7_DICTIONARY_ACCESSIBILITY = {TRUE FALSE} Allow Dictionary Access (as in Ora V7 )
OBJECT_CACHE_MAX_SIZE_PERCENT = int DEFERRED Space for application objects Max
OBJECT_CACHE_OPTIMAL_SIZE = int DEFERRED Space for application objects Min
OLAP_PAGE_POOL_SIZE =int bytes
OPEN_CURSORS = int Max # cursors per session
OPEN_LINKS = int Max # open links per Session
OPEN_LINKS_PER_INSTANCE = int Max # open links per instance
OPTIMIZER_DYNAMIC_SAMPLING = int
OPTIMIZER_FEATURES_ENABLE = {8.0.0 8.0.3 8.0.4 8.0.5 8.0.6 8.0.7 8.1.0 8.1.3 8.1.4 8.1.5 8.1.6 8.1.7 9.0.0 9.0.1 9.2.0}
Configure qry optimiser based on an Oracle release No.
OPTIMIZER_INDEX_CACHING = int Percent to cache (favour nested loop joins & IN-list)
OPTIMIZER_INDEX_COST_ADJ = int Adjust the cost of index vs FTS
OPTIMIZER_MAX_PERMUTATIONS = int Max join permutations per qry block
OPTIMIZER_MODE = [RULE CHOOSE FIRST_ROWS ALL_ROWS]
oracle_trace_collection_name =collection Name for use by Oracle TRACE
oracle_trace_collection_path =path Path to .cdf & .dat files (ORACLE_HOME/otrace/admin/cdf)
oracle_trace_collection_size =int bytes Max trace file size
oracle_trace_enable = {TRUE FALSE} Enable Oracle Trace
oracle_trace_facility_name ={ORACLED ORACLEE ORACLESM ORACLEC} TRACE event set
oracle_trace_facility_path =path TRACE definition files: ORACLE_HOME/otrace/admin/fdf/
OS_AUTHENT_PREFIX = prefix Prefix for auto-logon accounts [string]
OS_ROLES = {TRUE FALSE} Retrieve roles from the operating system
PARALLEL_ADAPTIVE_MULTI_USER = {TRUE FALSE} Tune degree of parallelism
PARALLEL_AUTOMATIC_TUNING = {TRUE FALSE} Automatic tuning
PARALLEL_EXECUTION_MESSAGE_SIZE = int bytes Message buffer size
PARALLEL_INSTANCE_GROUP = 'group' RAC: Limit instances used
PARALLEL_MAX_SERVERS = int
PARALLEL_MIN_PERCENT = int Min percent of threads required for
parallel query
PARALLEL_MIN_SERVERS = int
PARALLEL_SERVER = [TRUE FALSE] Startup in parallel server mode
PARALLEL_SERVER_instances = int No. of instances (used for sizing SGA)
PARALLEL_THREADS_PER_CPU = int
PARTITION_VIEW_ENABLED = {TRUE FALSE} Deprecated (use partition TABLES)
PGA_AGGREGATE_TARGET = int bytes Automatically size the SQL working area
plsql_code_type ={INTERPRETED NATIVE} Code-type
PLSQL_COMPILER_FLAGS = { [DEBUG NON_DEBUG] [INTERPRETED NATIVE] }
plsql_debug ={TRUE FALSE}
plsql_native_c_compiler
plsql_native_library_dir = ['Path_to_directory']
plsql_native_library_subdir_count = int
plsql_native_linker =path Path to linker
plsql_native_make_file_name =path Pathname of make file
plsql_native_make_utility =path Pathname of make utility
plsql_optimize_level Optimize level
PLSQL_V2_COMPATIBILITY = {TRUE FALSE} [DEFERRED]
plsql_warnings =string Compiler warnings settings
See also DBMS_WARNING and DBA_PLSQL_OBJECT_SETTINGS
PRE_PAGE_SGA = {TRUE FALSE} Pre-page SGA for process
PROCESSES = int User processes
QUERY_REWRITE_ENABLED = {FORCE TRUE FALSE} [DEFERRED NOOVERRIDE]
QUERY_REWRITE_INTEGRITY = {ENFORCED TRUSTED STALE_TOLERATED}
RDBMS_SERVER_DN = Distinguished Name
READ_ONLY_OPEN_DELAYED = {TRUE FALSE} Delay opening read_only files until first access
RECOVERY_PARALLELISM = int Server processes to use for parallel recovery
REMOTE_ARCHIVE_ENABLE = [RECEIVE[,SEND] FALSE TRUE]
Enable or disable sending archived redo logs to/from remote destinations
REMOTE_DEPENDENCIES_MODE = {TIMESTAMP SIGNATURE}
Remote-procedure-call dependencies mode
REMOTE_LISTENER =network_name
REMOTE_LOGIN_PASSWORDFILE ={NONE SHARED EXCLUSIVE} Use a password file
REMOTE_OS_AUTHENT = {TRUE FALSE} Allow non-secure remote clients to use
auto-logon accounts
REMOTE_OS_ROLES = {TRUE FALSE} Allow non-secure remote clients to use os roles
REPLICATION_DEPENDENCY_TRACKING = {TRUE FALSE}
RESOURCE_LIMIT = {TRUE FALSE} Master switch for resource limit
RESOURCE_MANAGER_PLAN = plan_name Turn on Resource Manager plan
resumable_timeout =seconds Set resumable_timeout
ROLLBACK_SEGMENTS = (rbs1 [, rbs2] ... )
ROW_LOCKING = [ALWAYS DEFAULT INTENT]
SERIAL_REUSE = {DISABLE SELECT DML PLSQL ALL} Cursor memory management
SERVICE_NAMES = db_service_name [,db_service_name [,...] ]
SESSION_CACHED_CURSORS = int Session cursors to cache
SESSION_MAX_OPEN_FILES = int Max no. of BFiles (LOB) each session can open
SESSIONS = int Max no. of user and system sessions
SGA_MAX_SIZE =int bytes Initial SGA size
sga_target = int bytes Target size of SGA
SHADOW_CORE_DUMP = {PARTIAL FULL NONE} Include SGA in core file
SHARED_MEMORY_ADDRESS = int SGA starting address (platform specific)
SHARED_POOL_RESERVED_SIZE = int bytes Reserved area of shared pool
SHARED_POOL_SIZE = int Size of shared pool
SHARED_SERVERS = int Number of shared servers to start up (MTS)
SHARED_SERVER_SESSIONS = int Max number of shared server sessions
SKIP_UNUSABLE_INDEXES = {TRUE FALSE}
smtp_out_server = server_clause utl_smtp server and port configuration parameter
SORT_AREA_RETAINED_SIZE =int bytes [DEFERRED] UGA Memory to retain (Shared Server)
SORT_AREA_SIZE = int bytes [DEFERRED] In-memory sort work area (Shared Server)
SORT_MULTIBLOCK_READ_COUNT Obsolete in 9i
SPFILE =spfile_name Parameter file
sp_name =name Service Provider Name
SQL92_SECURITY = {TRUE FALSE} Require select privilege for update/delete
SQL_TRACE = {TRUE FALSE} Enable SQL trace
sqltune_category =category Qualifier for applying hintsets
SQL_VERSION =version Sql language version, for compatibility
STANDBY_ARCHIVE_DEST = 'filespec' Standby database archivelog destination
STANDBY_FILE_MANAGEMENT = {MANUAL AUTO}
Automate file management on standby DB
STAR_TRANSFORMATION_ENABLED = {TEMP_DISABLE TRUE FALSE}
STATISTICS_LEVEL = {ALL TYPICAL BASIC} Collect Statistics
streams_pool_size = int bytes Size of the streams pool
TAPE_ASYNCH_IO = {TRUE FALSE} Allow I/O requests to tape devices at the same time as CPU processing
THREAD =int Redo thread to use (RAC)
TIMED_OS_STATISTICS = int Gather OS statistics every x seconds
TIMED_STATISTICS = {TRUE FALSE} Collect time statistics
TIME_ZONE = '[+ -] hh:mm' LOCAL DBTIMEZONE 'time_zone_region'
TRACE_ENABLED = {TRUE FALSE} Trace execution path (Internal use only-Oracle support services)
TRACEFILE_IDENTIFIER = "traceid" Trace file custom identifier
TRANSACTION_AUDITING = {TRUE FALSE} [DEFERRED]
TRANSACTIONS = int Max. number of concurrent active transactions
TRANSACTIONS_PER_ROLLBACK_SEGMENT = int
UNDO_MANAGEMENT = {MANUAL AUTO} Undo space management mode (Manual=rollback segs)
UNDO_RETENTION = int Undo retention in seconds
UNDO_SUPPRESS_ERRORS = {TRUE FALSE} Suppress RBU errors in SMU mode
UNDO_TABLESPACE =undoname Select an undo tablespace
USE_INDIRECT_DATA_BUFFERS = {TRUE FALSE} Configure SGA memory cache for >4Gb RAM
USE_PRIVATE_OUTLINES = {TRUE FALSE category_name }
USE_STORED_OUTLINES = { TRUE FALSE category_name} [NOOVERRIDE]
USER_DUMP_DEST = 'directory_name' User process dump directory
UTL_FILE_DIR Utl_file accessible directories list
UTL_FILE_DIR ='Path1', 'Path2'..
or
UTL_FILE_DIR ='Path1' # Must be
UTL_FILE_DIR ='Path2' # consecutive entries
WORKAREA_SIZE_POLICY = {AUTO MANUAL} Policy used to size SQL working areas

http://www.ss64.com/orasyntax/initora.html
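As the legend above notes, dynamic parameters can be changed with ALTER SYSTEM or ALTER SESSION, while static ones can only be recorded in the Pfile/SPfile and take effect on restart. A brief sketch (the parameters and values chosen are just illustrations):

```sql
-- Dynamic parameter: takes effect immediately and persists in the SPFILE
ALTER SYSTEM SET db_cache_advice = ON SCOPE = BOTH;

-- Static parameter: can only be written to the SPFILE; requires a restart
ALTER SYSTEM SET sga_max_size = 2G SCOPE = SPFILE;

-- Session-level change: affects only the current session
ALTER SESSION SET nls_date_format = 'YYYY-MM-DD HH24:MI:SS';
```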

Monday, June 16, 2008

ALTER TABLESPACE OFFLINE vs. ALTER DATABASE DATAFILE OFFLINE

There is a difference between:
Taking the tablespace offline and taking the datafiles offline

ALTER TABLESPACE tablespace_name OFFLINE; performs a checkpoint on all of the tablespace's datafiles and then takes the datafiles offline.

ALTER DATABASE DATAFILE 'datafile_path' OFFLINE; does not perform a checkpoint, so if the database is open, you may need to perform media recovery when bringing the file back online.

That is the reason why:
You cannot do ALTER DATABASE DATAFILE ... OFFLINE if you are in NOARCHIVELOG mode (but taking the tablespace offline works)
You cannot do ALTER TABLESPACE ... OFFLINE if the database is read-only (but taking the datafile offline works)

Note that in both cases you can check the STATUS column of v$datafile to see whether the file is online, offline or needs recovery.
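The difference can be seen in a quick SQL*Plus sketch (the tablespace and file names are examples):

```sql
-- Checkpointed offline: can be brought straight back online
ALTER TABLESPACE users OFFLINE;
ALTER TABLESPACE users ONLINE;

-- No checkpoint: media recovery may be needed before the file comes back
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' OFFLINE;
RECOVER DATAFILE '/u01/oradata/ORCL/users01.dbf';
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' ONLINE;

-- Check file status either way
SELECT file#, name, status FROM v$datafile;
```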

http://www.quest-pipelines.com/pipelines/dba/tips06.htm

Wednesday, June 11, 2008

System Change Number (SCN)

The system change number (SCN) is an ever-increasing internal timestamp that uniquely identifies a committed version of the database. Every time a user commits a transaction, Oracle records a new SCN. You can obtain SCNs in a number of ways, for example,

a) alert log.
b) select dbms_flashback.get_system_change_number from dual;

The SCN plays an important role in determining whether the database is in a consistent state when it is brought up. The background process SMON checks the SCN in all datafile headers when the database is started. Everything is fine if all of these SCNs match the SCN found in the controlfile; they must be the same. If the SCNs are not the same, the database is in an inconsistent state and recovery is needed.

The SCN plays a vital role for providing consistent reads.

Basically, it works as follows: the query reads a db block. This block has as an attribute the SCN at which it was last changed. If this SCN is greater than the SCN that was in place when our query began, the block was changed after we started our query, so we have to find an older version of the block. If that version is found in the rollback segments/undo, we use it for our query.
When a reading query starts, it is important that the values of the rows selected are the same as when the query started, even if another session has changed those rows. This is referred to as read consistency. Read consistency is achieved through the SCN (system change number).

When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

The System Change Number (SCN) is how Oracle internally keeps track of the timing of events (i.e., transactions) in the database. The SCN does not have a time associated with it, but a higher SCN occurred after a lower SCN. There is no correlation between the SCN values and the time between when the SCNs were generated. Two SCNs that differ by a value of 10 could be generated one second apart or 10 minutes apart.

Since the online redo log files contain transaction information, it is useful to know when that transaction took place, relative to other transactions. So SCNs are written to the online redo log files. Any record written to the online redo log file has an associated SCN so the database will know when that change took place.

The SCN is stored in other places as well. One of the most important is the control files. The control file keeps track of changes to the database. As such, it needs to know the SCN.
The datafile headers also contain the SCN of the most recent transaction that changed the datafile's contents. When you start Oracle, it checks the SCN in each datafile against the SCN in the control file. If the SCN in the datafile is "older" than the SCN in the control file, Oracle knows that the file needs recovery. When you make a tablespace READ ONLY, the SCN in all of its datafiles is frozen, but Oracle does not worry about this because it knows the files have not had changes made to them.
Another place the SCN is stored is in the data block. The SCN in the block will show when the block’s contents were last updated.
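The SCNs mentioned above can be inspected directly. A small sketch, assuming a user with access to the v$ views:

```sql
-- Current SCN of the database
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;

-- Checkpoint SCN recorded in the controlfile
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN in each datafile header; these must agree with the
-- controlfile's view of the files for a consistent startup
SELECT file#, checkpoint_change# FROM v$datafile_header;
```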

From
http://jenniferlinca.wordpress.com/category/backup-and-recovery/
http://searchoracle.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid41_gci1188377,00.html

Thursday, April 10, 2008

Oracle Index Monitoring

The Oracle engine maintains all indexes whether they are used or not. There are many applications out there that create a lot of indexes without realizing that maintaining those indexes takes resources. Too many unused indexes may also slow things down, especially on write-intensive systems.
Index monitoring is a way to identify unused indexes so that you can safely remove them.

TO START OR STOP INDEX MONITORING

ALTER INDEX INDEX_A MONITORING USAGE;
ALTER INDEX INDEX_A NOMONITORING USAGE;


When you monitor these indexes, make sure you run the monitoring over the full lifecycle of your application. Do not run it when your application is not running, because you may not get the true picture.
The information on index usage is displayed in V$OBJECT_USAGE:

SELECT index_name, table_name, monitoring, used, start_monitoring, end_monitoring
FROM v$object_usage
WHERE index_name = 'INDEX_A'
ORDER BY index_name;

You must connect as the object owner to see the usage.
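To review all monitored indexes at once rather than one at a time, a query along these lines can be used (again, connected as the object owner):

```sql
-- Monitored indexes that have not been used since monitoring started;
-- candidates for review before dropping
SELECT index_name, table_name, start_monitoring
FROM   v$object_usage
WHERE  monitoring = 'YES'
AND    used = 'NO'
ORDER  BY table_name, index_name;
```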

One flaw in this method is that it simply isn't granular enough. All it will tell you is that an index has been used; it doesn't give a picture of how often it is used.

There is a way to do it but it is another story...:-)

Thursday, March 27, 2008

Optimizer_mode and statistics gathering in 10g

The OPTIMIZER_MODE parameter has a new default value of ALL_ROWS in Oracle Database 10g.
In Oracle Database 10g, the optimizer uses dynamic sampling to get statistics for tables that have none, and it uses the Cost Based Optimizer (CBO). The other two possible values are FIRST_ROWS_n and FIRST_ROWS.
The CHOOSE and RULE values are no longer supported by Oracle.

In Oracle Database 10g an automatic statistics gathering job is enabled by default when a database is created. This is because of the default setting STATISTICS_LEVEL=TYPICAL.
This is also true when you do a database upgrade. Oracle will automatically collect statistics for all database objects that are missing statistics by running an Oracle Scheduler job (GATHER_STATS_JOB) during a predefined maintenance window.
Statistics on a table are considered stale when more than 10% of the rows have changed.
By default the maintenance window opens every night from 10:00 pm to 6:00 am and all day on weekends. This can be changed.

In 9i, to gather statistics automatically on tables you had to turn MONITORING on when creating the table or with the ALTER TABLE command.
In 10g, MONITORING and NOMONITORING are no longer in use. They have been deprecated.

Monitoring tracks the approximate number of DML operations (INSERT, UPDATE and DELETE) on the table since the last time statistics were gathered. Information about how many rows are affected is maintained in the SGA, and periodically (about every 15 minutes; this can be changed) the background process SMON flushes the data into the data dictionary (another use of SMON).
This data dictionary information is made available through the DBA_TAB_MODIFICATIONS, ALL_TAB_MODIFICATIONS and USER_TAB_MODIFICATIONS views.
Oracle uses these views to identify tables with stale statistics. Whenever there is a 10% change in the data in a table, Oracle considers its statistics stale and will collect statistics during the maintenance window (STATISTICS_LEVEL=TYPICAL, the default).
Since the CBO is used in 10g, having up-to-date statistics is important for generating good execution plans. Automatic statistics collection is the way to go in 10g.
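The staleness tracking described above can be inspected, and statistics gathered manually where needed. A brief SQL*Plus sketch (the SCOTT.EMP table is just an example):

```sql
-- Flush the in-SGA monitoring data to the dictionary on demand
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- See accumulated DML counts since statistics were last gathered
SELECT table_name, inserts, updates, deletes
FROM   user_tab_modifications;

-- Gather statistics manually for one table
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
```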

Wednesday, March 26, 2008

Oracle Remote Diagnostic Agent (RDA)

Remote Diagnostic Agent (RDA) is a set of command-line diagnostic scripts executed by an engine written in the Perl programming language.
RDA is a good tool for gathering detailed information about an Oracle environment. The data gathered is used to aid in problem diagnosis and also to see the overall system configuration.

Oracle Support encourages the use of RDA because it provides a detailed picture of the customer's environment. RDA is designed to be as unobtrusive as possible; it does not modify Oracle or the OS in any way. It collects data that is useful for diagnostics and for seeing the overall picture.

OS Platforms that are supported
IBM AIX
Intel Linux (Oracle Enterprise, RedHat, and SuSE)
HP-UX (10.* and 11.*)
HP Tru64
Linux PPC
Mac OS X/Darwin
Sequent Dynix/Ptx
Sun Solaris (2.6 - 2.10)
Windows XP Professional
Windows 2000 Workstation and Server
Windows 2003 Server
HP OpenVMS Alpha 7.3-2 and above
HP OpenVMS Itanium
ZLinux


Oracle Products that are supported
Oracle RDBMS Server (Standard and Enterprise Editions)
Oracle RAC Cluster (Single/Multiple Nodes, Automatic Storage Management, Oracle Cluster File System, Data Guard)
Oracle Application Server (iAS 1.0.2.x/9.0.x/10.1.2.x/10.1.3.x/10.1.4.x,
HTTP Server, WebCache, J2EE/OC4J)
Oracle Data Integrator
Oracle Management Server and Intelligent Agent (Grid Server, Agent Server, DB Control)
OLAP Products (Express Server, Financial Analyzer, and Demand Planning Server)
Oracle Developer (Forms and Reports)
Oracle JDBC/PRO *C/ODBC and other development client related information
Oracle Collaboration Suites (Email Server, Calendar, Discussions, Content Services, Workspaces, WebClient, and Wireless)
Oracle Identity Management (Internet Directory and COREid)
Oracle Networking products
Oracle Ebusiness Suite 11i and 12
Oracle Portal Software 7.0, 7.2, and 7.3
Oracle Retail (Retek)

For more information on how to download, install and run RDA, please see Metalink Note 330363.1.

Monday, March 17, 2008

What a DBA shouldn't do....

Here is a list of things that a DBA shouldn't do. I got this info from an Oracle expert, Rachel Carmichael of Dragonfly Consulting LLC. She presented this topic at an event organized by the International Oracle Users Group (IOUG). I think it will benefit you all.

1. Don't back up your database - This is one BIG MISTAKE you can make. Whether the database is slow or fast, it is still running. When the data is gone, you are gone too.

2. Do not test the recovery features - When you do back up, please test the recovery features; this is the best way to know whether you are backing up properly.

3. If you do backups, you do not need to export - Sometimes an export can help you recover from certain issues. A good DBA must always think of ways to lower MTTR (Mean Time To Recover).

4. Do not monitor the database; if anything is wrong, users will complain - To be a good DBA, always be PROACTIVE, not REACTIVE.

5. Use cache hit ratios instead of wait events to monitor performance - On older systems, perhaps. On 9i and 10g, BETTER to use wait events. More accurate.

6. Got a performance problem? Increase your shared pool - Please investigate what the real issue is. Sometimes increasing memory can halt your system.

7. Index every single column in every single table, just in case - The more indexes you have, the more upkeep Oracle has to do. Please always justify your index creation.

8. Don't analyze tables or generate stats! - Oracle becomes cleverer if you generate stats; also, please test first.

9. Fix your space problems by turning autoextend on for every datafile in every tablespace - This is not the best way or the right way. You won't know when you are going to hit a big problem.

10. Go ahead -- apply every critical fix without testing them first - Please test; sometimes it can backfire.

11. When new releases come out, upgrade ASAP and use all the new features in each release - Please first know whether you need them and whether your application supports them. Always let others test first before you do it :-)

12. Grant everyone "connect," "resource" and "DBA" privileges - This is the worst thing you can do. Give access only to users who need it.

Enjoy thinking.......
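Points 5 and 8 above can be made concrete with a short SQL sketch. This is a minimal illustration, assuming a 10g-era database; SCOTT.EMP is just the classic demo schema, and the WAIT_CLASS column only exists from 10g onwards.

```sql
-- Point 8: generate optimizer statistics with DBMS_STATS
-- (test on a non-production system first)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',   -- demo schema, substitute your own
    tabname => 'EMP',
    cascade => TRUE);     -- also gathers stats on the table's indexes
END;
/

-- Point 5: monitor wait events rather than cache hit ratios
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'   -- WAIT_CLASS exists from 10g onwards
ORDER  BY time_waited DESC;
```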

Monday, February 11, 2008

Basic Database security

As a DBA, it is your responsibility to secure your organization's database from unauthorized access.

In this article I am going to suggest some guidelines that should be mandatory for all Oracle databases.
1. Data Integrity:

Always keep a check on the integrity of the data in the database. Before adding any data, whether important or unimportant, your system must ensure that the data is valid and correct. Data integrity prevents data from becoming invalid and ensures queries always return correct results.

2. Use Access Control System:

Always grant only appropriate privileges to users, and give those privileges only to users who require them to accomplish their tasks. Granting excessive or unnecessary privileges can compromise the security of your database.

3. Role Based Authentication:

You can also use role-based authentication. Segregate roles into administrator, programmer, operator, user and so on. Permission to perform a certain task should be based on the privileges granted to that role. Also maintain a log for each role to keep track of its activities.

4. Password Protection:
Make your database administrative accounts password protected. Also maintain a password for each user account in the database.

5. Use Oracle Views:

Views provide a powerful and flexible security mechanism for the database. By using views you can hide parts of the database from certain users. The user is not aware of the parts missing from the view.


6. Back Up your database appropriately:
Back up your database and archive logs correctly, so that you can recover it in case of failure. Do test your backups from time to time.

7. Maintain Log Files:
Maintain your database log files if you can. This will help you keep track of clients' activities.

8. Disable Remote Access:
If it is feasible for your company, ensure that no users can get remote access to your database system. This will reduce the risk of network attacks from unauthorized or malicious users.
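The access-control, role and view guidelines above can be sketched in SQL as follows. The role, user and object names (app_reader, alice, hr.employees, hr.emp_public) are purely illustrative, not part of any standard schema.

```sql
-- Guidelines 2 and 3: a minimal role instead of a blanket DBA grant
CREATE ROLE app_reader;
GRANT CREATE SESSION TO app_reader;
GRANT SELECT ON hr.employees TO app_reader;
GRANT app_reader TO alice;   -- give the role only to users who need it

-- Guideline 5: expose only non-sensitive columns through a view
CREATE VIEW hr.emp_public AS
  SELECT employee_id, first_name, last_name  -- salary deliberately omitted
  FROM   hr.employees;
GRANT SELECT ON hr.emp_public TO app_reader;
```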

Monday, January 28, 2008

Opening an Oracle 10gR2 standby database in read-write mode for DML and DDL operations without affecting the primary database

Task

Open the standby database in read-write mode, perform a test, and convert it back to a standby database without recopying datafiles or logfiles or recreating the standby controlfile. Then show that the test did not affect the primary database.

Summary

Before proceeding, you must have a working Data Guard configuration. Make sure that logs are being shipped from the primary to the standby. The task here is to open the standby database in read-write mode, insert some data, and then revert it to a state consistent with the primary database without recopying datafiles or recreating the standby controlfile. This is similar to the snapshot standby feature available in 11g.

Benefit

This is helpful when you have a working Data Guard configuration and want to test some scripts on live data. After the tests, the data can be reverted to its original state. This also avoids the need to create a test database or perform an export and import to get live data to test on, which is time consuming.


I. STEPS

NOTE: Make sure that Data Guard is working

NOTE: The following steps are performed on the PRIMARY

Create a table and insert some values
SQL> create table mytest(id number);
SQL> insert into mytest values(1);
SQL> insert into mytest values(2);
SQL> insert into mytest values(3);
SQL> commit;

NOTE: The following steps are performed on the STANDBY

Stop the Dataguard
a. SQL> alter database recover managed standby database cancel;

Enable Flashback
a. Check whether flashback is enabled
i. SQL> select flashback_on from v$database;
ii. If it returns NO, then enable it:
· SQL> alter database flashback on;

Create restore point
SQL> create restore point before_open guarantee flashback database;
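Optionally, you can confirm that the guaranteed restore point exists before proceeding, by querying the v$restore_point view:

```sql
-- Confirm the guaranteed restore point was created
SELECT name, guarantee_flashback_database, time
FROM   v$restore_point;
```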

NOTE: The following steps are performed on the PRIMARY

Archive the current log file
a. SQL> alter system archive log current;

Stop logs being shipped to the standby
a. SQL> alter system set log_archive_dest_state_2=DEFER;

NOTE: The following steps are performed on the STANDBY

Activate the standby database
a. SQL> alter database activate standby database;

Open the standby database
a. SQL> alter database open;

Query mytest table and insert some values
a. SQL> select * from mytest; (Here you will get 3 rows)
b. SQL> insert into mytest values(4);
c. SQL> insert into mytest values(5);
d. SQL> insert into mytest values(6);
e. SQL> select * from mytest; (Here you will get 6 rows)

NOTE: The following steps are performed on the STANDBY

To revert the standby back to its previous configuration, use the restore point we created earlier. This means any changes made to the data while the standby was open are lost.
a. SQL> shutdown immediate;
b. SQL> startup mount;
c. SQL> flashback database to restore point before_open;

Switch it back to a standby database
a. SQL> alter database convert to physical standby;

NOTE: The following steps are performed on the PRIMARY

Enable log shipping from the primary again
a. SQL> alter system set log_archive_dest_state_2=ENABLE;

NOTE: The following steps are performed on the STANDBY

Drop the restore point
a. SQL> drop restore point before_open;

Activate the Dataguard
a. SQL> alter database recover managed standby database disconnect from session;

NOTE: The following steps are performed on the PRIMARY

Query the mytest table
a. SQL> select * from mytest; (Here you will get the 3 original rows)

Check whether the logs are being shipped from the primary to the standby
a. SQL> archive log list;
b. SQL> alter system switch logfile;

NOTE: The following steps are performed on the STANDBY

Check whether the standby is receiving the logs
a. SQL> select sequence#, applied, archived from v$archived_log;
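As an optional final sanity check (not part of the original steps), you can compare the latest log sequence received on the standby with the latest one applied; the two should converge once managed recovery catches up.

```sql
-- Latest archived log received on the standby
SELECT MAX(sequence#) AS last_received
FROM   v$archived_log;

-- Latest archived log already applied by managed recovery
SELECT MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES';
```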