Alejandro Vargas,
Oracle Israel,
Principal Support Consultant
Yes, there are several production sites using Data Guard that are larger than 10 TB.
I was able to find specific references for 251 production sites using Data Guard Physical Standby + RAC.
By operating system they are distributed as follows:
HP-UX 40
Solaris 57
Linux 83
Other 71
Yes, uncommitted transactions that are later rolled back will be written to the flashback logs.
1) Yes. Once a flashback is performed and the database is opened using RESETLOGS, a second flashback can
be performed to an SCN smaller than that of the first flashback; the database incarnation will be set
accordingly.
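A minimal sketch of the sequence described above, assuming the target SCNs (which are illustrative here) are within the flashback retention window:

```
-- first flashback, then open with RESETLOGS (creates a new incarnation)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 1500000;   -- hypothetical SCN
ALTER DATABASE OPEN RESETLOGS;

-- a second flashback to an earlier SCN is still possible;
-- the database incarnation is adjusted accordingly
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 1400000;   -- hypothetical, earlier SCN
ALTER DATABASE OPEN RESETLOGS;
```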
2) No With flashback
Yes, using LogMiner.
References:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14239/upgrades.htm#sthref2055
Note 407040.1: How To Upgrade A Primary Database And A Physical Standby To Oracle10gR2 (10.2)
7) Trigger related information
http://www.psoug.org/reference/table_trigger.html
http://www.psoug.org/reference/ddl_trigger.html
http://www.psoug.org/reference/system_trigger.html
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm#ADFNS012
http://www.oracle.com/customers/products/database.html
http://www.oracle.com/customers/products/rac.html
Referenceable customers may be contacted through your Oracle Sales Manager or SDM.
11) Strategies for migration from FS to ASM
• RMAN
• DBMS_FILE_TRANSFER
• Online reconfiguration
• FTP
• ALTER TABLE MOVE
• Gradual migration is possible by having datafiles on both FS and ASM
• The data can be moved using any available downtime window and any of the methods above.
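The RMAN option above can be sketched per datafile as follows (the disk group name +DATA and datafile number 4 are illustrative):

```
RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '+DATA';
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
```

Because each datafile is switched individually, the migration can proceed gradually, with some files remaining on the filesystem while others already live on ASM.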
13) What is the difference in the STATISTICS_LEVEL parameter between 9i and 10g?
SELECT STATISTICS_NAME,
SESSION_STATUS,
SYSTEM_STATUS,
ACTIVATION_LEVEL,
SESSION_SETTABLE
FROM v$statistics_level
ORDER BY 1
/
17 rows selected.
In 9i, STATISTICS_LEVEL controls the following features:
8 rows selected.
No, flashback cannot be done after a DDL, because the DDL invalidates the undo data for the object.
Both Flashback Table and Flashback Query will fail with an error.
15) Can we restore a dropped table after a new table with the same name has been
created?
No; export can be done for objects as of a point in time using the FLASHBACK_TIME or FLASHBACK_SCN export parameters.
Data Definition Language statements that alter the structure of a table, such as drop/modify column, move
table, drop partition, truncate table/partition, and so on, invalidate the old undo data for the table.
It is not possible to retrieve a snapshot of data from a point earlier than the time such DDLs were executed.
To export a dropped table, you will first need to execute FLASHBACK DROP on it.
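The FLASHBACK DROP step can be sketched as follows (the table name employees is illustrative; the RENAME TO clause is what allows recovery when a new table with the original name already exists):

```
-- restore the dropped table from the recycle bin under a new name
FLASHBACK TABLE employees TO BEFORE DROP RENAME TO employees_restored;
```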
Sampling is comprehensive.
* In 10.2 the bind values used at parse time are available, in 9.2 and 10.1 they are not.
* The values for user binds are available, the values for system binds are not (cursor_sharing).
* Literals replaced by system binds are used for optimization even if _optim_peek_user_binds = false.
To display the values of user bind variables that were peeked when a statement was parsed, use
DBMS_XPLAN.DISPLAY_CURSOR with the undocumented 'ADVANCED' (or 'PEEKED_BINDS')
format. The peeked bind values are stored and extracted from the other_xml column of v$sql_plan.
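A sketch of the DBMS_XPLAN call described above (the sql_id value is a hypothetical placeholder for the cursor you want to inspect):

```
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', NULL, 'ADVANCED'));
```

The output includes a "Peeked Binds" section with the bind values captured at parse time.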
It is activated by setting STATISTICS_LEVEL = TYPICAL and deactivated by setting it to BASIC.
It can be deactivated without affecting other AWR activity using the event awr_flush_table_off, level 56.
References:
Note 273121.1 - How To Find The Value of a Bind Variable Without Tracing
Note 296765.1 - Solutions for possible AWR Library Cache Latch Contention Issues in Oracle 10g
Note 43808.1 - VIEW: "V$SQL_BIND_DATA" Reference Note
TimesTen is a different product. Its latest release, Version 7, was completely developed inside Oracle, and
tighter integration with the Oracle 10g Database was implemented in it.
Oracle TimesTen 7 delivers several key enhancements to help customers capture, access, and manage
information significantly faster, including:
* Data type compatibility — familiar Oracle Database 10g data types are now available in Oracle
TimesTen 7 for easier application development and caching data in memory;
* New SQL features — Oracle TimesTen 7 includes enhanced SQL functionality, with similar semantics
and behaviors as Oracle Database 10g;
* Globalization functionality — more than 50 database character sets and 80 languages are now supported
with Oracle TimesTen 7;
* Automatic data aging and on-demand data loading — when used as a cache, data in Oracle TimesTen 7
can be dynamically loaded and automatically aged out of memory; and,
* Cross-tier high availability — data can be automatically synchronized across replicated Oracle TimesTen
databases and Oracle Databases.
The new real-time caching capabilities of Oracle TimesTen 7 support a wide variety of applications where
instant access to specific types of data from an Oracle database is required. Oracle TimesTen 7 provides out-
of-the-box support for the most common caching scenarios:
* Dynamic, on-demand caching — often used in customer-facing applications to load data for a specific
customer when they first arrive and add or update data as the interaction progresses. In a call-center or
customer portal application, this helps speed access to information for the specific group of customers
currently being served;
* Sliding time window caching — useful in business intelligence and analytics applications, where the last
10 minutes of production data or the last 30 days of retail sales data are heavily used for real-time decision-
making; and,
* Reference data caching — useful for customer-facing Web sites or business process acceleration, where
product catalogs, business rules and metadata are heavily accessed.
In each of these scenarios, Oracle TimesTen 7 automatically manages the data movement between the cache
and the underlying enterprise Oracle database, simplifying application development. In addition, Oracle
TimesTen 7 also supports Oracle Real Application Clusters (RAC), Oracle Fusion Middleware, Oracle SQL
Developer and Oracle JDeveloper.
20) How does Oracle 10g decide the number of processes used to drop a partitioned table?
I was not able to find this information in the 10g R2 Administration, New Features, Reference, or SQL
Reference manuals.
This is a compilation of the available information:
When dropping a large partitioned table in non-recoverable PURGE mode, the DROP operation is
internally split to drop chunks of partitions.
Dropping large partitioned tables can affect tens of thousands of partitions that all have to be logically
removed from the system. The capability of transparently dropping such an object in an incremental fashion
optimizes the resource consumption and positively affects the run-time behavior.
To avoid running into resource constraints, the DROP TABLE...PURGE command for a partitioned table
drops the table in multiple transactions, where each transaction drops a subset of the partitions or
subpartitions and then commits. The table becomes completely dropped at the conclusion of the final
transaction.
If the DROP TABLE...PURGE command fails, you can take corrective action, if any, and then restart the
command. The command resumes at the point where it failed.
When you drop a partitioned table with the PURGE keyword, the statement executes as a series of
subtransactions, each of which drops a subset of partitions or subpartitions and their metadata. This division
of the drop operation into subtransactions optimizes the processing of internal system resource consumption
(for example, the library cache), especially for the dropping of very large partitioned tables. As soon as the
first subtransaction commits, the table is marked UNUSABLE. If any of the subtransactions fails, the only
operation allowed on the table is another DROP TABLE ... PURGE statement. Such a statement will resume
work from where the previous DROP TABLE statement failed, assuming that you have corrected any errors
that the previous operation encountered.
You can list the tables marked UNUSABLE by such a drop operation by querying the status column of the
*_TABLES, *_PART_TABLES, *_ALL_TABLES, or *_OBJECT_TABLES data dictionary views, as
appropriate.
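A sketch of the behavior described above (the table name sales_history is illustrative):

```
-- drops in multiple committed subtransactions; not recoverable from the recycle bin
DROP TABLE sales_history PURGE;

-- if the command fails part-way, the table is marked UNUSABLE;
-- correct the underlying error and simply reissue the same statement,
-- which resumes from the point of failure:
DROP TABLE sales_history PURGE;

-- tables left UNUSABLE by a failed drop can be listed with:
SELECT owner, table_name, status
  FROM dba_tables
 WHERE status = 'UNUSABLE';
```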
References:
http://st-doc.us.oracle.com/10/102/server.102/b14231/partiti.htm#ADMIN10062
http://st-doc.us.oracle.com/10/102/server.102/b14214/chapter1.htm#sthref360
http://st-doc.us.oracle.com/10/102/server.102/b14200/statements_9003.htm#i2061306
http://st-doc.us.oracle.com/10/102/server.102/b14231/partiti.htm#sthref2746
http://www.ardentperf.com/2007/03/23/tuning-sql-statement-execution-in-10g-part-2/
Typically, an accepted SQL Profile is associated with the SQL statement through a special SQL signature
that is generated using a hash function. This hash function normalizes the SQL statement for case (changes
the entire SQL statement to upper case) and white spaces (removes all extra whites spaces) before
generating the signature. The same SQL Profile thus will work for all SQL statements that are essentially
the same, where the only difference is in case usage and white spaces. However, by setting force_match to
true, the SQL Profile will additionally target all SQL statements that have the same text after normalizing
literal values to bind variables. This may be useful for applications that use literal values rather than bind
variables, since this will allow SQL with text differing only in its literal values to share a SQL Profile. If
both literal values and bind variables are used in the SQL text, or if this parameter is set to false (the default
value), literal values will not be normalized.
It is important to note that the SQL Profile does not freeze the execution plan of a SQL statement, as done
by stored outlines. As tables grow or indexes are created or dropped, the execution plan can change with the
same SQL Profile. The information stored in it continues to be relevant even as the data distribution or
access path of the corresponding statement change. However, over a long period of time, its content can
become outdated and would have to be regenerated. This can be done by running Automatic SQL Tuning
again on the same statement to regenerate the SQL Profile.
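The force_match behavior described above is chosen when the profile is accepted. A sketch, assuming a completed tuning task named 'my_tuning_task' (the task name is hypothetical):

```
DECLARE
  l_profile_name VARCHAR2(30);
BEGIN
  l_profile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
    task_name   => 'my_tuning_task',  -- hypothetical tuning task
    force_match => TRUE);             -- also match SQL differing only in literals
END;
/
```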
Information on captured execution plans for statements in SQL Tuning Sets is displayed in the
DBA_SQLSET_PLANS and USER_SQLSET_PLANS views.
Dynamic views containing information relevant to SQL tuning include V$SQL, V$SQLAREA,
V$SQLSTATS, and V$SQL_BINDS.
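A sketch of querying the captured plans (the SQL Tuning Set name MY_STS is illustrative):

```
SELECT sqlset_name, sql_id, plan_hash_value
  FROM dba_sqlset_plans
 WHERE sqlset_name = 'MY_STS';
```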
• OTN RAC
• OTN ASM
• OTN High Availability
• OTN Grid Control
• OTN Recovery Manager
• 10g R2 Online Documentation
• 10g R2 Documentation Download
• RAC Customer References
• Oracle RAC on VMware
• Oracle Software Download
• Enterprise Linux Download
• Real Application Clusters Virtual Book
• ASM Virtual Book
• Enterprise Manager Virtual Book
• RMAN Virtual Book
• Grid Computing Virtual Book