PL/SQL if inserting / updating

November 30, 2002 - am UTC

I might try an anti-join on the NOT EXISTS and do the work in a single parallel CTAS:

create table invtmp as
select * from invins
UNION ALL
select t1.*
  from inv t1, invdel t2
 where t1.key = t2.key(+)
   and t2.key is null;

Add parallel / nologging as needed.
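For illustration only, the "parallel / nologging" decoration might look like the following sketch -- the degree of parallelism and hint placement are assumptions, not part of the original answer:

create table invtmp
  parallel 4
  nologging
as
select /*+ parallel(t0, 4) */ * from invins t0
UNION ALL
select /*+ parallel(t1, 4) */ t1.*
  from inv t1, invdel t2
 where t1.key = t2.key(+)
   and t2.key is null;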

March 17, 2003 - am UTC

Sorry -- I thought it obvious that in most cases "no" is the answer.

This is what we came up with concerning mass updates:

INV    50M rows
INVINS 10M rows
INVDEL  7M rows

There are indexes on INV.KEY.

Execution Plan (for deletes and updates)
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   FILTER
   2    1     TABLE ACCESS (FULL) OF 'INV'
   3    1     INDEX (RANGE SCAN) OF 'IX_INVDEL01' (NON-UNIQUE)

alter table INVTMP nologging;

-- INVINS contains delta inserts and updates
insert /*+ APPEND */ into INVTMP
select * from INVINS t1;

-- INVDEL contains delta deletes and updates
insert /*+ APPEND */ into INVTMP
select * from INV t1
 where not exists ( select null from INVDEL t2 where t2.KEY = t1.KEY );

alter table INVTMP logging;
drop table INV;
rename INVTMP to INV;
-- build indexes, etc.

This is what we came up with, and it is so far the fastest approach we've tested.

Any comments or suggestions are welcome and appreciated.
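As a footnote to the "build indexes, etc." step above, the rebuild after the rename might look like this sketch -- the index name and parallel degree are made up for illustration:

create index IX_INV_KEY on INV ( KEY )
  parallel 4
  nologging;

-- reset the attributes once the build is done
alter index IX_INV_KEY noparallel;
alter index IX_INV_KEY logging;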

In a package I am writing, I do a massive delete operation, then rebuild the indexes, then start the next routine.

November 19, 2002 - pm UTC

You can use user_jobs or dba_jobs, but you might just want to put some "logging" into your jobs themselves so you can monitor their progress and record their times.
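A minimal sketch of that kind of job-side "logging" -- the table JOB_LOG and the procedure REBUILD_WITH_LOG are made-up names for illustration, not from the original answer:

create table job_log
( step       varchar2(100),
  start_time date,
  end_time   date );

create or replace procedure rebuild_with_log( p_index in varchar2 )
as
begin
    -- record the start, do the rebuild, record the end
    insert into job_log ( step, start_time ) values ( 'rebuild ' || p_index, sysdate );
    execute immediate 'alter index ' || p_index || ' rebuild nologging';
    update job_log set end_time = sysdate
     where step = 'rebuild ' || p_index and end_time is null;
    commit;
end;
/

Each job then just calls rebuild_with_log, and you watch JOB_LOG (or user_jobs) to see when the last rebuild finished.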

What would be the best way to detect the end of rebuilding, in order to proceed with the next call? Thanks Tom.

In our environment, we have partitioned tables and we use:

ALTER TABLE table_name MODIFY PARTITION part_name REBUILD UNUSABLE LOCAL INDEXES;

and this rebuilds all the indexes in the partition in one shot.
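In context, that statement is usually paired with marking the local indexes unusable before the bulk work; a sketch with made-up table and partition names:

alter table inv modify partition p_2002_11 unusable local indexes;

-- ... direct-path load / mass change against that partition ...

alter table inv modify partition p_2002_11 rebuild unusable local indexes;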

I want to update and commit every so many records (say 10,000 records).

Fortunately, you are probably using partitioning, so you can do this easily in parallel -- bit by bit.
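One way to read "in parallel -- bit by bit" is to kick off one job per partition; a sketch, assuming a hypothetical procedure PROCESS_PARTITION that does the real work for one partition of a table named BIG_TABLE:

declare
    l_job number;
begin
    for p in ( select partition_name
                 from user_tab_partitions
                where table_name = 'BIG_TABLE' )
    loop
        dbms_job.submit( l_job,
                         'process_partition( ''' || p.partition_name || ''' );' );
    end loop;
    commit;   -- the jobs only start running after the commit
end;
/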

create table new_table as select <columns with the updated values> from old_table;
index new_table
grant on new_table
add constraints on new_table
etc. on new_table
drop table old_table
rename new_table to old_table;

You can do that using parallel query, with nologging on most operations, generating very little redo and no undo at all -- in a fraction of the time it would take to update the data. I don't have a 100 million row table to test with for you, but the amount of work required to update 1,000,000 indexed rows is pretty large.
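A concrete (made-up) version of that recipe, computing the "update" in the select list while copying -- the table, columns, and parallel degree are illustrative only:

create table new_table
  parallel 4
  nologging
as
select id,
       name,
       decode( status, 'OLD', 'NEW', status ) status   -- the "update" happens here
  from old_table;

-- then, on new_table: create the indexes (nologging, parallel),
-- re-grant privileges, add the constraints, and finally:
drop table old_table;
rename new_table to old_table;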

With nologging, if the system aborts, you simply re-run the 'update' again, as you have the original data in the main table. When done, we swap the partition of original data with the 'dummy' table (the one containing the new values), rebuild indexes in parallel, and voila! I.e., to update field2 in a table: 1) first create your dummy hold table: create table xyz_HOLD as select * from xyz where rownum < 1.

Hi Tom, as you suggested, we should create a new table, then drop the original table and rename the new table to the original table, instead of updating a table with millions of records. But what happens to dependent objects? Everything will get invalidated.

We have a similar situation. We delete around 3 million records from a 30 million row table every day. The only difference between your code and mine is that I issue just one commit at the end. Here are the numbers I've got: rebuilding indexes sequentially consistently took 76 sec., while using dbms_job.submit() calls took around 40 - 42 sec. I said "around" because the technique I used may not be perfect, though it served the purpose.
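The "swap the partition of original data with the 'dummy' table" step is a partition exchange; a sketch with made-up names (xyz partitioned on p1, columns field1/field2/field3, and upper() standing in for whatever the new field2 value really is):

-- 1) empty hold table with the same shape as xyz (structure only)
create table xyz_HOLD as select * from xyz where rownum < 1;

-- 2) fill it with partition p1's rows, computing the new field2 on the way in
insert /*+ append */ into xyz_HOLD
select /*+ parallel(x, 4) */ x.field1, upper(x.field2) field2, x.field3
  from xyz partition (p1) x;

-- 3) swap the hold table in for the partition, then rebuild the local indexes in parallel
alter table xyz exchange partition p1 with table xyz_HOLD without validation;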
