The other day I told the story of how Christian Berg went out of his way to give me some invaluable pointers in OBIEE: How to Migrate Your rpd.
Apparently I haven't been the only one putting pressure on him to share his knowledge because he started his blog yesterday. So take a stroll over to his site and welcome him, then add him to your feed reader of choice.
Thursday, March 26, 2009
Who's Your DBA?
I'm a "kind of" DBA, but not the real deal (yet). I do things from the application perspective which all you real DBAs laugh at (stop it).
I like competition, it's good for the soul. I've begun wondering who the best DBA in the world would be. As far as I know, there is no such competition.
A good place to start though would be the Oracle Certified Masters. Based on the curriculum to achieve said credential, I'd say these people would rank among the best right?
What about those that choose not to take it, but are more than worthy? Would it be those that blog? How could I ever know? (No Howard, it doesn't really matter, it's just curiosity).
Anyway, my DBA got mentioned by one of the better known DBAs out there, Tanel Poder. I'd say that's pretty cool.
Miladin Modrakovic is his name, Oraclue is his game. (Corny, I know).
Joking aside, I like (love?) working with this guy. If there is something I don't understand I can pass it along without fear of him not knowing. That hasn't always been my experience. My first "DBA" gig I was the only one there...I knew everything, or had to. It's such a relief to know that I don't have to know everything.
So if you like that real DBA stuff, internals, diagnostics, etc., check out his site. It's well worth the read.
Wednesday, March 25, 2009
OBIEE: How to Migrate Your rpd
None, absolutely none of the following is anything I produced.
Again, Twitter to the rescue.
Who is Christian Berg? Well, to me, he's the guy who commented on my first OBIEE post. We corresponded back and forth via email, and after that I believe I convinced him to join Twitter. He was very helpful in his emails to me, pointing me in the right direction and so on.
Anyway, shortly after my last tweet above, I received an email from Christian. It was a detailed explanation of the different ways that you can migrate your rpd file, i.e. move what you have in development/QA to production.
So, with permission, I'll reprint the email here (by the way, people need to pressure him to blog, I don't want to have to keep giving him credit here ;).
Christian Berg
1.) You have a full multi-user development environment which allows you to group your rpd objects in "Projects" and use a check-in/check-out mechanism against a central repository. I.e. there's the central rpd on the server, you check out a project, make your changes and check it back in to the server. [ link ]
2.) You merge your local repository with the production repository. You probably knew this one already, and can't use it since your local repository contains more changes than you actually want to transfer. So here's a little trick: strip your local rpd down to the bare minimum of changes you want to propagate. This way all the merge will do is an upsert and shove all your new objects into the central rpd. [ link ]
3.) You can use "Import from archive". It's been deprecated and deactivated by Oracle in order to push people to use the "merge" functionality. However, it's still alive and kicking in the background. It's a nice feature if you know exactly what you want to transfer and if what you want to transfer is really "encapsulated". I.e. you don't start shooting into generic business models or stuff like that, but have everything in nice purpose-built objects which are added on top of existing ones. As far as "updating" goes, the import is the most brutal one. [ link ]
There you have it. I would encourage all of you to do the following:
1. Let Christian know that he needs to start/resume his blog.
2. Give him lots of money by way of jobs or bonuses. He is most deserving.
Tuesday, March 24, 2009
Random Things
No Oracle stuff here, just feel like writing.
Kate
Kate, my little monster, decided to go to the hospital again. Penuemonia (yes, I spelled it wrong, that's how we say it around here). Last week her lungs were beautiful, probably the best doc has ever seen them. Her head was another matter, not a single black spot to be found. Diagnosis: sinus infection. Apparently it traveled down to her lungs. Hopefully it will be a short stay.
Words
"Publicly?" Why do I want to spell it "Publically?" Pub-lick-lee.
Kris and Little Chet
Kris let Little Chet watch Terminator 2 last night. Seriously? I watched most of it with him and there really isn't that much gore. A lot of bullets flying, but it's 2 robots shooting at each other right? I came down to check on him during the final scene of the movie when the Terminator is put into the super hot molten stuff. He was crying. He didn't want the Terminator to die.
Coaching Baseball
I've been "coaching" Little Chet's 5-6 year old baseball team. It's coach pitch the first 5 then bring out the tee. Hands down the best stress reliever of all time. I get to yell at a bunch of kids for and hour and a half. Is there a better way to live? I don't think so.
Me Yelling, nay, Screaming
"Come on! Hustle of my field!"I think the parents enjoy the show I put on...at least I hope they do.
"Quit playing in the dirt!"
"That's why you wear a cup!"
"I'll hit you with this tennis ball if you walk off that field again!"
"No wrestling!"
Thanks for listening.
Monday, March 23, 2009
Cisco Fatty - A Cautionary Tale
For those of you who don't know, last week a young woman interviewed at Cisco and shortly after leaving published the following tweet:
Cisco just offered me a job! Now I have to weigh the utility of a fatty paycheck against the daily commute to San Jose and hating the work.
Needless to say, it wasn't the smartest thing in the world to do.
Since I have some experience in the matter, I figured I would chime in.
Last year, about this time, I blogged about WellCare's layoffs. Looking back I find it...funny? that I didn't know better, despite people telling me not to do it. Maybe I was just stubborn. Anyway, I blogged it and in less than an hour it came across my Google Alerts, along with everyone else in the company who had set up Google Alerts for WellCare. Oops.
I removed the post that afternoon when my VP gave me an earful and the rest is history. My contract was terminated the following Tuesday when I returned to work.
With Twitter it's even easier though. There's an illusion of privacy I think, especially for those who don't truly understand the social web. I didn't completely understand it, or what the consequences could be. Perhaps I did though and just didn't care. I still wrestle with that. I'm not proud of the way it went down, but it's in the past now.
That said, I think I took responsibility for my actions. I have expressed regret here and in interviews. I don't blame anyone but myself.
@theconnor though seems to be taking another tack. While stating
it was crass of me to say what I did and I take full responsibility for the stupidity of my action.
she then goes on to talk of the impact of Twitter and how people don't really know what it is or what its effect will be. Really? What about the people that have been fired for blogging (ahem)? What about the people that have been fired for posting on Facebook?
I think she was on the right path with her decision to post, but I don't think she goes far enough to show she's learned from it. She had a golden opportunity to redeem herself ever so slightly, but she didn't. Humility in this instance would have gone a long way in her future endeavors.
Update
Hat tip to Jake
Thursday, March 19, 2009
COLLABORATE 09 - OAUG Forum
For those that don't know, COLLABORATE 09 is May 3rd through May 7th in Orlando, Florida.
About COLLABORATE 09
The Independent Oracle Users Group (IOUG), the Oracle Applications Users Group (OAUG) and Quest International Users Group (Quest), present COLLABORATE 09, Technology and Applications Forum for the Oracle Community. As an educational conference, COLLABORATE 09 helps users of the full family of Oracle business applications and database software gain greater value from their Oracle investments. Created by and for customers, COLLABORATE 09 offers an expert blend of customer-to-customer interaction and insights from technology visionaries and Oracle strategists. Expand your network of contacts by interacting with Oracle customers, solutions providers, consultants, developers and representatives from Oracle Corporation at COLLABORATE 09.
I was fortunate enough to attend Oracle OpenWorld in November of 2002 (with a few swims in the bay) and I thoroughly enjoyed it and felt like I learned quite a bit by going. I would love to attend as many of these conferences as I can, but I haven't stayed anywhere long enough to be eligible for perks like conferences. But that's not the only way to get there...
There are a couple of different ways to go:
1. Pay your own way - Conference fees, hotel and travel could easily run about $3,000.
2. Have your company pay for all of it - The ideal, but not likely, especially given the current financial situation most companies are facing.
3. Submit a paper, have it accepted, and present - As much as I would love to do this, I don't think I'm quite ready yet. This would at least get you in the door, but you would still have to pay for travel and lodging.
4. Get a media pass - OpenWorld has offered bloggers media passes for at least the last 2 years. You're still on the hook for travel and lodging.
5. Live close to the host city so you don't have to pay for travel and lodging.
6. Combine #4 and #5 and make it a family vacation. Bonus points for getting your parents to pay for lodging!
I opted for #6. How? Easy. Twitter.
At 10:41 PM Feb 4th from TweetDeck I posted:
A few days later I received this reply:
And then silence...until I pinged them again:
Got a response the next day:
Thanked them appropriately:
Then yesterday I checked my email and I had a Direct message from COLLABORATE_09_OAUG:
How cool is that?
Lesson Learned:
Don't be afraid to ask; the worst someone can say is "No."
I daresay this would not have been possible without Twitter. Sure, I could have looked up their email address, but I probably would not have. Twitter made this absolutely painless. Thanks to Jake for finally giving me a reason to use Twitter.
Scribefire
I was reading Jake's post on JotNot and somehow I came across ScribeFire, a Firefox add-on. I'm thinking, man, this is very cool. I looked at the version number, 3.2. Hmmm...maybe I'm not really an early adopter here. I Googled for it and found this post, which describes some of the functionality.
The thing I like best about it is the split window. As I'm linking stuff up I have to do either Alt+Tab to switch windows (work) or Ctrl+Tab to shift tabs (home).
Here's what it looks like:
Yes, I still write my posts in the source view. I like ScribeFire so far and will try it out for a few posts.
Tuesday, March 17, 2009
Capturing Record History
Somehow I ended up on StackOverflow tonight answering this question on "Database history for client usage."
The Question
I'm trying to figure out what would be the best way to have a history on a database, to track any Insert/Delete/Update that is done. The history data will need to be coded into the front-end since it will be used by the users. Creating "history tables" (a copy of each table used to store history) is not a good way to do this, since the data is spread across multiple tables.
At this point in time, my best idea is to create a few History tables, which would reflect the output I want to show to the users. Whenever a change is made to specific tables, I would update this history table with the data as well.
I'm trying to figure out what the best way to go about this would be. Any suggestions will be appreciated.
I am using Oracle + VB.NET
The first answer was the suggestion of a history table along with a trigger. I wanted to dive into it a little further here.
My Answer
Personally, I would stay away from triggers. They can be a nightmare when it comes to debugging and not necessarily the best if you are looking to scale out.
If you are using a PL/SQL API to do the INSERT/UPDATE/DELETEs, you could manage this with a simple shift in design without the need (up front) for history tables.
All you need are 2 extra columns, DATE_FROM and DATE_THRU. When a record is INSERTed, the DATE_THRU is left NULL. If that record is UPDATEd or DELETEd, just "end date" the record by making DATE_THRU the current date/time (SYSDATE). Showing the history is as simple as selecting from the table, the one record where DATE_THRU is NULL will be your current or active record.
Now if you expect a high volume of changes, writing off the old record to a history table would be preferable, but I still wouldn't manage it with triggers, I'd do it with the API.
I don't particularly care for triggers as they can "hide" actions from you. But the answer IS valid, it's the easy way to go in my opinion.
As stated above, I prefer one table, with the DATE_FROM (or create date) and the DATE_THRU. DATE_THRU is null until that record has been changed in some way. If the record has been updated, the old one will get a DATE_THRU of the current date/time and a new record will be INSERTed. If DELETEd, just the UPDATE of DATE_THRU will be performed.
I tend to think of addresses when thinking of something like this. An address is created on day 1. On day 2, the person comes back and changes their zip code. There are now 2 records in the table associated with that customer and only one that is active (Day 2's).
Like I said in my answer though, if this is a high volume table with lots of changes to be expected, you may have to rethink this (I still wouldn't go with the trigger approach). Either keep my method and have a retention policy for records that have been end dated (DATE_THRU NOT NULL) and write them off somewhere else or go with the history table, but write the records via the PL/SQL API.
You also have to ask yourself if you really need to keep track of changes. Does it really matter? Can this be handled in the design another way? Can it be handled by not letting certain records be UPDATEd or DELETEd?
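To make that concrete, here's a minimal sketch of the end-dating approach using a made-up ADDRESSES table (the table and column names are mine, purely for illustration); in a real system the UPDATE/INSERT pair would live inside the PL/SQL API:
CREATE TABLE addresses
(
  address_id  NUMBER PRIMARY KEY,
  customer_id NUMBER NOT NULL,
  zip_code    VARCHAR2(10),
  date_from   DATE DEFAULT SYSDATE NOT NULL, -- create date
  date_thru   DATE                           -- NULL means this is the active record
);
-- Day 1: the address is created
INSERT INTO addresses ( address_id, customer_id, zip_code )
VALUES ( 1, 100, '32601' );
-- Day 2: the zip code changes. End date the old record...
UPDATE addresses
SET date_thru = SYSDATE
WHERE customer_id = 100
AND date_thru IS NULL;
-- ...and INSERT the new version
INSERT INTO addresses ( address_id, customer_id, zip_code )
VALUES ( 2, 100, '33601' );
-- The active record is simply the one where DATE_THRU is NULL
SELECT *
FROM addresses
WHERE customer_id = 100
AND date_thru IS NULL;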
Financial Services: Credit Card 101
As I stated before, I'm trying to become the Master of My Domain. Actually, I think I just get a kick out of writing that down.
Anyway, I've been reading up on Credit Card processing. There is some pretty good information out there, though nothing that has really jumped out at me. Most of it is from the consumer point of view. I understand how a credit card works. You sign up or apply, the issuing bank gives you credit based on your credit history, part of that being your credit score.
A credit score, in the United States, is a number representing the creditworthiness of a person, or the likelihood that person will pay his or her debts. A credit score is primarily based on a statistical analysis of a person's credit report information, typically from the three major American credit bureaus: Equifax, Experian, and TransUnion. Lenders, such as banks and credit card companies, use credit scores to evaluate the potential risk posed by lending money to consumers and to mitigate losses due to bad debt. Using credit scores, lenders determine who qualifies for a loan, at what interest rate, and to what credit limits.
You buy things (a loan), and pay back the loan when you are billed. Unfortunately, many of us carry a balance, which incurs fees and interest over time. If you haven't read your cardholder agreement, you really should.
Anyway, I've been looking for a big picture overview to start with and this is the best I can find to this point:
[ link ]
I want something a bit more granular, this is too high level.
I did find this .doc file (and if you don't like opening .doc files, you can view my Google Document of the same here). From there I can get some basic definitions:
The Acquirer
The acquirer is a member of MasterCard and Visa, and is contracted with merchants to accept merchant sales drafts, provide authorization terminals, instructions, and support, and handle the processing of credit card transactions. The key responsibilities of the acquirer are:
* Sales
* Investigation Procedures
* Pricing
* Merchant Acceptance
* Support Services
* Risk Management
The Issuer
The issuer is responsible for the cardholder account program, which encompasses nearly all aspects of cardholder account activities, ranging from acquiring new customers to billing current ones. The issuer's responsibilities include:
* Acquisition and marketing of new accounts
* Processing applications; establishing credit limits and policies
* Overseeing design, manufacturing, and embossing of inventory cards
* Handling of issuing and reissuing of cards
* Overseeing PIN Numbers
* Maintaining authorization file
* Providing customer service
* Processing payments and handling settlement and income Interchange
* Establishing collections operations.
I'm on the path. The more exposure I get, the easier this will become.
If you've got any good links or articles, send them on.
Monday, March 16, 2009
What's the Purpose of a Datawarehouse?
I've been meaning to write this since I first began my foray into the world of datawarehousing back in December of 2006.
What is the purpose of a Datawarehouse?
1. Reporting
2. Key Performance Indicators (KPI)
3. Performance - Data is stored in such a way (facts and dimensions) that it performs better than the same relational tables in an OLTP system.
4. Slicing and Dicing - Viewing the data from multiple angles (dimensions) is one of the best aspects of the Datawarehouse. Also included is the ability to drill down into the finer grained details.
5. Single Source of Truth - Especially important in an enterprise environment which may have many systems of record. The datawarehouse allows you to aggregate all those systems into one environment. Not always achieved, it's more of an ideal than anything.
6. Clean Source Data (Transformation) - The datawarehouse allows you to "fix" the sub-par OLTP systems. Think garbage in, garbage out.
I also recognize that in an enterprise environment you can have the best-designed OLTP systems, but you still have to map the data to one field.
Ultimately, I think #1 should drive the design of all OLTP systems, making concessions where needed for performance. From there, your datawarehouse should flow fairly naturally. The ability to report on the data that you gather allows the business to make decisions for the future. In other words, it's the data, stupid.
Thursday, March 12, 2009
How To Populate Your TIME Dimension
Yesterday I wrote about the Datawarehousing TIMES table. Today I populated it. I modified it for my own purposes naturally, specifically removing the fiscal components (thankfully calendar year equals fiscal year). I think that saved me days of figuring out how to calculate certain fields.
I remember the first time I tried to do this it took about 20 different SQL statements. I generated the key (date, time_id) and then calculated the rest, either in a loop or by performing another SQL statement using specific dates out of the table (the last day of the month, for instance). This time I managed to do it in a single SQL statement. Again, I left out some of the more complicated calculations since I did not have to worry about fiscal requirements.
I also got a short refresher course in the date format models which always come in handy and learned a new function, NEXT_DAY.
NEXT_DAY returns the date of the first weekday named by char that is later than the date date. The return type is always DATE, regardless of the datatype of date. The argument char must be a day of the week in the date language of your session, either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version. Any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date.
Here's the SQL to populate your TIME Dimension:
SELECT
TRUNC( sd + rn ) time_id,
TO_CHAR( sd + rn, 'fmDay' ) day_name,
TO_CHAR( sd + rn, 'D' ) day_number_in_week,
TO_CHAR( sd + rn, 'DD' ) day_number_in_month,
TO_CHAR( sd + rn, 'DDD' ) day_number_in_year,
TO_CHAR( sd + rn, 'W' ) calendar_week_number,
( CASE
WHEN TO_CHAR( sd + rn, 'D' ) IN ( 1, 2, 3, 4, 5, 6 ) THEN
NEXT_DAY( sd + rn, 'SATURDAY' )
ELSE
( sd + rn )
END ) week_ending_date,
TO_CHAR( sd + rn, 'MM' ) calendar_month_number,
TO_CHAR( LAST_DAY( sd + rn ), 'DD' ) days_in_cal_month,
LAST_DAY( sd + rn ) end_of_cal_month,
TO_CHAR( sd + rn, 'FMMonth' ) calendar_month_name,
( ( CASE
WHEN TO_CHAR( sd + rn, 'Q' ) = 1 THEN
TO_DATE( '03/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 2 THEN
TO_DATE( '06/30/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 3 THEN
TO_DATE( '09/30/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 4 THEN
TO_DATE( '12/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
END ) - TRUNC( sd + rn, 'Q' ) + 1 ) days_in_cal_quarter,
TRUNC( sd + rn, 'Q' ) beg_of_cal_quarter,
( CASE
WHEN TO_CHAR( sd + rn, 'Q' ) = 1 THEN
TO_DATE( '03/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 2 THEN
TO_DATE( '06/30/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 3 THEN
TO_DATE( '09/30/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
WHEN TO_CHAR( sd + rn, 'Q' ) = 4 THEN
TO_DATE( '12/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
END ) end_of_cal_quarter,
TO_CHAR( sd + rn, 'Q' ) calendar_quarter_number,
TO_CHAR( sd + rn, 'YYYY' ) calendar_year,
( TO_DATE( '12/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' )
- TRUNC( sd + rn, 'YEAR' ) ) days_in_cal_year,
TRUNC( sd + rn, 'YEAR' ) beg_of_cal_year,
TO_DATE( '12/31/' || TO_CHAR( sd + rn, 'YYYY' ), 'MM/DD/YYYY' ) end_of_cal_year
FROM
(
SELECT
TO_DATE( '12/31/2002', 'MM/DD/YYYY' ) sd,
rownum rn
FROM dual
CONNECT BY level <= 6575
)
/
You can find the table definition and the INSERT statement here as well.
Any feedback good, bad or ugly is welcome.
Wednesday, March 11, 2009
Time
Many years ago (about 6 or 7 really), my boss sent me this diagram from the Oracle Docs. In particular, he wanted me to focus on the times table. After a few emails exchanged I finally figured out what he was asking, well sort of. He had used a table called periods described below:
CREATE TABLE periods
(
periodid NUMBER(10),
datefrom DATE,
datethru DATE,
periodtype VARCHAR2(30) -- DAY, MONTH, YEAR, ETC
);
This was primarily (solely perhaps) used for reporting. He had actually built this pretty cool page for the business to be able to drill down on certain key aspects of the business. That may have been my first real exposure to Data Warehousing.
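As an illustration (my SQL, not his), resolving a single date against PERIODS means a range lookup per period type, assuming the table is populated with one row per day, month, quarter and year:
-- Which month does a given date fall in?
SELECT periodid, periodtype, datefrom, datethru
FROM periods
WHERE TO_DATE( '03/11/2009', 'MM/DD/YYYY' ) BETWEEN datefrom AND datethru
AND periodtype = 'MONTH';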
The point he was trying to make though was that this TIMES table was a flattened version of his PERIODS table. If you have one date, you can see how that date fell in multiple dimensions. What month it took place. What quarter it took place. What day of the year (1-365) it took place. All with just a single row of data.
The TIMES table, or dimension is a standard in data warehousing for exactly this reason.
I've been working on our "data warehouse" (replicated tables from the OLTP) and had been using the PERIODS methodology. There is a time dimension, but it's not populated at this time.
Since I am by no means a data warehousing expert and I have virtually no experience with OBIEE, I'm trying to come up with a way to help bridge both those gaps. Use OBIEE against the relational tables to help me learn OBIEE. I've created a couple of small subject areas so far and demo'd them to the users and they love it. So do I in fact.
Anyway, back to my topic. As I was thinking about TIME, I began to reminisce about time and how I got started. It's funny how I remember that exact moment sitting there with my feet propped up on my desk in my garage office.
Was there a point to all this? I don't think so. Just me rambling on.
Tuesday, March 10, 2009
What I Learned Today
I was deploying a project to our UAT environment and having problems getting the script to finish. We have these weird timeout issues that, if connected, will boot you out after 30 minutes or so. Some believe it's a firewall issue and others believe it's a security control.
This particular piece of code would bump me out after about an hour, 2/3rds of the way through.
I tried to spruce up the anonymous block by instrumenting the code with dbms_application_info.set_session_longops, removing unnecessary commits and replacing loops with set based operations.
I get it all prettied up and kick it off then open up a JDeveloper User-Defined Report to watch it run. The SQL is (from the linked post above):
SELECT
username,
sid,
serial#,
TO_CHAR( start_time, 'MM/DD/YYYY HH24:MI:SS' ) start_ti,
time_remaining rem,
elapsed_seconds ela,
ROUND( ( sofar / NULLIF( totalwork, 0 ) ) * 100, 2 ) per, -- NULLIF guards against divide by zero
sofar,
totalwork work,
message,
target_desc
FROM v$session_longops
WHERE start_time >= SYSDATE - 1
ORDER BY start_time DESC
But I don't see anything returning. WTF?
I went to talk to Mr. DBA and asked him if he could help me monitor. He opened up OEM, but that's not what I was thinking. I was thinking of the Toad --> DBA --> Session browser. Finally I talked him into that and there was the code running. WTF?
"Why can't I see it? I'm using v$session_longops too."
"Ah yes young Padawan, this is RAC, you must use the GV$ tables."
Jobs: Senior Oracle DBA (Datawarehousing)
Anyone looking to move down to sunny Florida?
Email (chet dot justice at the gmail) or leave a comment and I'll pass along the contact information to you.
Job Responsibility:
...seeking a senior level Oracle Datawarehousing DBA to support Oracle 10g in a datawarehouse environment. You will be one of two DBA's supporting databases in excess of 6 Terabytes. This is a production/ application environment and requires 24/7 support. The DBA will be responsible for all Oracle database administration and programming efforts on the Data Warehouse database system. This includes installing Oracle 10g, Oracle RAC, partitioning, ETL and purge efforts. Should be proficient writing stored procedures/triggers and understand how to tune those stored procedures to be able to run without impacting the production system.
Job Requirement:
- Most important requirement is that you have experience as a Senior Oracle DBA in a UNIX Data warehouse environment and proficient in an Oracle 10g environment.
- Must have the skills to support multiple Unix systems
- Strong Unix shell scripting experience
- Oracle software installation and patch management.
- Must have experience with RAC, Oracle encryption, PL/SQL programming, RMAN and backup and recovery.
- Must have experience with Database cloning, partitioning, Oracle auditing, materialized views and VLDB performance tuning.
Monday, March 9, 2009
Oracle As a Platform
I'm always hearing about this platform and that platform and all the wonderful things you can do with them.
What about Oracle as a platform?
Before you get all crazy eyes on me, I'm not talking about eBay, Google or the like. I'm talking about the other 99% of the applications out there.
Database
Oracle is the market leader in database technology. There is a very good reason for that and I'll let you decide on what that reason might be.
Web Server
Either Oracle's Application Server (Apache), which came on the Companion CD on 10g (not sure about 11g), or Oracle's built in XDB HTTP Server (pre-configured in 11g). MOD_PLSQL allows the web server to communicate with the database.
MOD_PLSQL is an Apache (Web Server) extension module that allows one to create dynamic web pages from PL/SQL packages and stored procedures. It is ideal for developing fast and flexible applications that can run on the Internet or an Intranet. MOD_PLSQL was formerly called the Oracle PL/SQL Cartridge and OWA (Oracle Web Agent).
Front End
You have 2 choices (that I am aware of) for creating web-based applications: Application Express and the HTP/HTF packages. The former is built atop the latter...and I'd definitely use ApEx before the packages (a quick sketch with the packages follows below).
Oracle Application Express (Oracle APEX) is a rapid web application development tool for the Oracle database. Using only a web browser and limited programming experience, you can develop and deploy professional applications that are both fast and secure.
ApEx is, in my experience, one of the most under-used tools out there.
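To show how little it takes, here's a toy page built with the HTP package (the procedure name is made up). With mod_plsql or the XDB HTTP server configured, it would be served at a URL along the lines of http://yourserver/pls/your_dad/hello_web:
CREATE OR REPLACE
PROCEDURE hello_web
AS
BEGIN
  htp.p( '<html><body>' );
  htp.p( '<h1>Hello from the database</h1>' );
  htp.p( '<p>Generated at ' || TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' ) || '</p>' );
  htp.p( '</body></html>' );
END hello_web;
/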
Procedural Language
PL/SQL is a very powerful language. It's loosely based on Ada and is a procedural extension to SQL. PL/SQL has been around since version 6 of the database, back in 1988. It supports arrays, collections, user defined types, and many other data structures. Best of all though, Oracle provides a vast library of PL/SQL code. Sending email? Got it. Lower level stuff? UTL_TCP should do the trick. Queueing? Done.
I could go on and on, but you get the point.
If you want to build powerful, robust applications that are low on complexity and high on performance, Oracle's the way to go.
What's that? Oracle licensing is too expensive? How about Oracle Standard Edition (SE) for $5,000? There's not a whole lot you don't get with SE.
Of course you could disagree, but I'd say you're pretty crazy.
Sunday, March 8, 2009
OBIEE: Retrieve report SQL
Wow, it's been almost 2 months since I asked how to retrieve the SQL from all the Answers reports. Like I said in my previous post, it would be nice to have an Object Dependency button or something like Application Express has.
I started to think this afternoon, the reports are stored in XML files, wouldn't it be easy to just search through them and extract the SQL? Better yet, maybe Catalog Manager has a tool like that.
It does! As I've gotten more comfortable with the environment things seem easier to find.
Once you select Create Report, you'll be prompted for fields you want to view:
Select Request SQL, fill in the path and name of the file you want to write it to:
You'll then see a window that indicates the report is running:
When the report is complete, you'll see the following:
Now just open up the file and browse the SQL:
Of course this won't be easy as there are 250 reports, but it certainly beats going through each individual report, opening it up, viewing the XML and copying the SELECT statements out. Plus, I really only need the Direct SQL Requests so that makes life a little bit easier.
Friday, March 6, 2009
Alerts - Scheduler Jobs Did Not Run
The last couple of days our warehouse environment hasn't been updating properly. It's basically a collection of Materialized Views based on the OLTP system. It appears as though many (all?) of our scheduler jobs are not running. DBA believes it has something to do with the Flash Recovery Area running out of space. I don't know and I don't have the time to find a root cause. I do know that there were no errors and the jobs were not in a failed status.
Rather than have the business inform us their reports are not right, I was tasked with creating an alert if the jobs did not run. Usually we set up alerts for things that break or actually complete. I think this is the first time I've had to build something for the opposite.
Here's what I came up with:
CREATE OR REPLACE
PROCEDURE jobs_did_not_run
AS
b VARCHAR2(3) := CHR(10);
TYPE r_table IS RECORD
(
job_name VARCHAR2(30),
next_run_date DATE,
job_action VARCHAR2(4000)
);
TYPE t_table IS TABLE OF R_TABLE;
l_table T_TABLE := T_TABLE();
l_subject VARCHAR2(50) := 'Alert - Jobs have not run';
l_message VARCHAR2(32767);
BEGIN
SELECT job_name, next_run_date, job_action
BULK COLLECT INTO l_table
FROM dba_scheduler_jobs
WHERE state = 'SCHEDULED'
AND enabled = 'TRUE'
AND next_run_date < SYSDATE;
l_message := 'The following jobs have not run today:' || b;
FOR i IN 1..l_table.COUNT LOOP
l_message := l_message || 'Job: ' ||
l_table(i).job_name || b;
l_message := l_message || 'Next Run Date: ' ||
TO_CHAR( l_table(i).next_run_date, 'MM/DD/YYYY HH24:MI:SS' ) || b;
l_message := l_message || 'Action: ' ||
l_table(i).job_action || b;
END LOOP;
utl_mail.send
( sender => '',
recipients => '',
subject => l_subject,
message => l_message );
END jobs_did_not_run;
/
show errors
You can find it here in my Google Code home for DBA Utilities. I do believe that at some future point this will be incorporated into a package, but for now it is a standalone procedure.
You'll receive a nice little email noting the JOB_NAME, scheduled NEXT_RUN_DATE and the JOB_ACTION (the anonymous block).
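To actually run the alert on a schedule, something like this would do it (a sketch; the job name and hourly interval are just my choices):
BEGIN
  dbms_scheduler.create_job
  ( job_name        => 'JOBS_DID_NOT_RUN_ALERT',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'JOBS_DID_NOT_RUN',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY;INTERVAL=1',
    enabled         => TRUE,
    comments        => 'Email an alert when scheduler jobs miss their next run date' );
END;
/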
Wednesday, March 4, 2009
Financial Services: Learning the Business
I'm on a quest to find all the information I can about the Financial Service industry...specifically how it works.
I started with CardReport, which besides being pretty ugly, has some fairly useful information.
Of course it led me to much more reliable info sources.
First up, the Federal Reserve. That took me to the Government Printing Office (GPO) where I can then read in excruciating (and exciting!) detail all about Title 12, Banks and Banking.
That takes me to Regulations.gov, where I can search through thousands of documents online (very nice). Sadly the material isn't all that exciting.
Finally, I found NACHA, The Electronic Payments Association. Here you can learn all about ACH (Automated Clearing House) payments. Joy.
I'm sure glad I've got this on my reading list. If I weren't a software developer, I'd probably go mad.
Poor Man's Data Vault
That's what I'm calling it anyway.
Part I.
Part II.
Part III.
I've started the project on Google Code
The goal is simple: a simple, easy to use version of Oracle's Data Vault. By no means is this trying to replace their product, it's just a simple solution to help lock down your Oracle database.
I've read through a bit of Oracle's documentation on Data Vault and it seems like pretty cool stuff. I started this as something simple for our environment and was told by a friend that it (loosely) resembles Data Vault.
So check it out if you want. Feedback, both good and bad, is welcome and appreciated. As the Generalissimo stated, I'm a big boy.
Marshalling
Learned a new word today. Actually, I had heard the word recently but didn't really understand what it meant.
Marshalling
...is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another.
Funny how I could be in the "business" for 7 years and never have heard the term. Obviously something I missed out on with my Health Education degree.
Sunday, March 1, 2009
Master of My Domain
No, this isn't a reference to The Contest a la Seinfeld. It's a reference to knowing and understanding the industry you work in. So far I've worked in Pharmaceutical Manufacturing (blood bank), Social Services, government health care and now Financial Services.
I've actually worked in Financial Services before when I tried and failed to start up my own company a few years back. Tried to create gift cards that could be used with taxis. I lived in Gainesville, Florida at the time which is a pretty big party town. Let's just say it took me almost 10 years to graduate.
Anyway, I had no idea what I was doing back then (as if I really do now).
I didn't do a very good job of learning the business of blood and that definitely hurt me. I was trying to stay above water on the technical side.
Social Services was a bit better as I would spend a lot of time asking questions.
Government health care, Medicaid, Medicare, PDP, so on and so forth, there's a lot of information out there. I spent most of my time on the technical aspects and didn't have a lot of exposure to the business of government health care plans.
I'm trying to change that now. I think my experience of starting my own business helped me understand many of the (very) basics of credit card processing. But there is so much more I don't know. In that vein, I'm going to start linking up rules and regulations from the banking industry as well to help me better understand. Hopefully you'll get something out of it too.
So CardReport is my first go. It looks cheesy and all, but it's a start. I'm sure that will lead me to more authoritative sources in the future or maybe I'll find that despite the cheese factor, it's good.
Credit Laws and Banking Regulations
Consumer Credit Overview
ORA-08177 - II
So my p.i.t.a. DBA won't link back to me. I told him it would be helpful to have both of our perspectives linked up since we're working on the same problem, so I'm gonna link to him.
Anyway, last week he was able to pull all the SQL statements from memory that were operating under the SERIALIZABLE isolation level. Pretty cool stuff. You can find his post here.
Also, on Thursday night I ran 4 simultaneous tests of approximately 500 transactions each. While running I found a whole bunch of table locks. Of those tables, I found any ENABLED triggers and DISABLED them. After disabling the 6 triggers, the 2000 or so transactions were successfully completed in about 60 seconds, and the locks on the tables were very brief.
Right before I went to bed I realized though that I had run them in the default mode of READ COMMITTED. I almost got up and re-ran but decided to wait until morning.
Disabling the 6 triggers again and this time doing
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
before each script fired, they all broke with ORA-08177 within seconds.
On my previous post pertaining to this error, Gary Myers left a well thought out comment that I wanted to highlight here:
The basic nugget is that every statement in the transaction happens as if it all happened at the same instant (ie the SCN that the transaction started). In the default mode, each SQL runs as of the SCN the statement starts executing.
Wouldn't expect it on a straight SELECT (maybe a SELECT ...FOR UPDATE). The error crops up when you try to get a record in current mode (ie you want to update/delete it) and it has changed since the start of the transaction. In default mode, the statement would lock until that other transaction completed, then restart as of a new SCN. Since in SERIALIZABLE mode it can't use a new SCN, it just falls over.
The 'solutions' are (1) make the transactions complete faster so they are less likely to overlap and (2) grab all the locks you need as early as possible. Shouldn't be trigger related unless they have autonomous transactions.
"usage of serializable isolation level is a design error that leads to non-scalable applications"
Don't agree with the word 'error'. It is a design choice. When you have a transaction of multiple statements, in default mode, the statements do not see a consistent picture of the data - even though each statement is individually consistent. There's a balance between consistency and concurrency. SERIALIZABLE moves that balance a bit more towards consistency.
I've also mapped out the process that produces the error and it ain't pretty. Most points are spawned off by the various triggers. I believe there is an underlying design flaw here as this particular process should be short and sweet. Some of the triggers firing are maintenance of post-processing data which would probably be served better by an asynchronous process taking care of that.
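If you've never seen the error and want to reproduce it yourself, here's a minimal two-session sketch (the table and data are made up):
-- One-time setup
CREATE TABLE t ( id NUMBER PRIMARY KEY, n NUMBER );
INSERT INTO t VALUES ( 1, 0 );
COMMIT;
-- Session 1: start a SERIALIZABLE transaction (the snapshot SCN is set here)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT n FROM t WHERE id = 1;
-- Session 2: change the same row and commit while Session 1 is still open
UPDATE t SET n = n + 1 WHERE id = 1;
COMMIT;
-- Session 1: the row changed after the transaction began, so...
UPDATE t SET n = n + 1 WHERE id = 1;
-- ORA-08177: can't serialize access for this transaction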