Ling Chi, or Death By a Thousand Cuts. I wrote about it before.
Today was one of the worst days I've had at this job.
I got home last night knowing I had to run and test a new line of business in our test environment. It was a fairly small subset, so it should have run fast, two or three hours. Around 1:30 AM, it completed. I did my quick sanity checks and realized that it just wasn't working right.
Crap, I forgot to load our new rates. That has to be it.
So I reran the rates and then reran our process.
It's now 3:00 AM. I run my sanity check and it's exactly the same as the last one.
I quickly realize that some of the rates hadn't been updated like I assumed they had. This test was null and void. I went to sleep...mad.
I woke up around 8:15 with my son telling me something about Transformers. What? Oh, OK, I'm in his bed. I take him to school and get to work around 9:15.
I'm cranky because the other developer didn't do his part, but probably more mad at myself for not making sure he completed his piece.
I throw a piece of candy. Better.
Of course my colleague isn't in today, so I have to make sure the DBAs deploy this to our test environment.
Finally I get the process working and it finishes around noon. I send out a note to the business and QA letting them know that they can begin.
I get a call from the business around 2:30: it's not working the way they expected. One of our rates overlaps another, causing invalid results to be returned.
Thanks for telling us that sooner. The day before deployment and we're getting new requirements. Awesome!
I walk over to my boss' desk with my badge in hand, ready to quit, kind of, to tell him the news. He talks me off the ledge.
We then head over to their building to discuss, and indeed it's something they didn't realize. OK, fair enough. Had they seemed to appreciate me/us a little bit more over the past eight months, I probably wouldn't have been so mad. It was just one more thing, though.
My boss decides how this will be remedied. I argue (gently) that this is not an application issue but a data issue. He agrees. We'll just put one job on hold and make the necessary changes to the [driving] data.
Two IT VPs, one business VP and the CIO need to sign off on two change requests, one for the driving data and one for the application.
We're slated to deploy tomorrow pending UAT signoff, though that shouldn't be a problem.
Ten to twenty percent of my time is spent writing code; the rest is paperwork. I believed I had mastered the administrative stuff, only to get this requirement change at the last minute.
One more thing...just one more thing.
Ling Chi indeed.
Friday, November 30, 2007
Thursday, November 29, 2007
Keeping it Simple
One of my all time favorite articles is The Complicator's Gloves on Worse Than Failure (formerly the DailyWTF). It identifies the tendency of software developers in particular to come up with overly complex solutions, usually when there is a much simpler one available.
This was the context of my latest rant to my CIO. Actually, this theme seems to play out in all my rants. Funny how that works.
While web services and the like have their place, many times they are used just because they are the cool new thing, not because of a pressing need. I know I am not the first to mention that nor will I be the last.
Whether it was years of reading AskTom (for pleasure, no less) or the influence of my first boss, I have striven to build applications that are scalable yet easy to maintain.
One of my proudest accomplishments as a developer came at my previous job, a small state-contracted agency where I was the lone developer. Thanks to a very trusting boss, I was allowed to install Oracle, and soon after I found Application Express (APEX). In 18 months I was able to create some 350 pages of forms and reports for the organization. One person, 350 pages. I once found a job ad for a web developer to help maintain a 100-page website on a team of six. What? Six people? Really? Must be Java or something. ;)
I continued to work for them on a contract basis for about six months after I left. Mostly until the new guy got comfortable. Unfortunately for me, they didn't require my services a whole lot. Yes, I could be deluding myself, I realize that...but I just don't believe it. They WOULD tell me.
Back to my point. At our organization we seem to have quite a few architects. They talk of Ruby on Rails, Java, JBoss, etc. MySQL gets a brief mention on occasion.
We have a hard enough time writing good SQL or PL/SQL, so now we're going to introduce new languages and a new database platform?
If we were a company that made software, I would probably be [mostly] on board, but we are not. We store and manage data for the business to do their job.
I do hope I am wrong about them and that they do talk about the importance of data in our organization. I just haven't seen it yet.
So, put it in the database, use APEX when appropriate (95% of the time) and keep it simple.
Monday, November 26, 2007
Oops I've Done it Again
By it, I mean I've sent another rant to my CIO. Here's my first one that I sent to Dratz before starting my own blog.
Fortunately, we've developed a bit of a rapport. I still should not do this type of thing on a holiday when it could be days before I hear back...it just makes my mind wander and wonder if I will have a job come Monday.
Update
I still have a job...I really need to stop this.
Saturday, November 24, 2007
Parallel Processing using DBMS_JOB
I found this article by ProdLife through the OraNA feed; it talked about running a report based on multiple queries. It reminded me of something I did a while back.
We have a multi-step process that loads data into two tables the business uses to reconcile our money in the door against our membership. Membership is at a month granularity (member month) and our money is transactional (a member may have multiple transactions within a given month).
One table stores the transactions joined with our members; it's not the grain the business needs, but it's useful for research. The other table summarizes the transactions to the month level and then joins with our membership so that both are at the same granularity. Currently we're pulling across about 27 million records for members and about the same for their transactions.
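To make the grain difference concrete, the month-level table gets something like the following. This is only a rough sketch; the table and column names are invented for illustration.
-- Hypothetical names: summarize transactions to the member-month grain,
-- then join to membership so both sides have one row per member per month.
INSERT INTO member_month_summary ( member_id, month_start, paid_amount )
SELECT m.member_id,
       m.month_start,
       NVL( t.paid_amount, 0 )
  FROM member_months m
  LEFT JOIN ( SELECT member_id,
                     TRUNC( transaction_date, 'MM' ) AS month_start,
                     SUM( paid_amount )              AS paid_amount
                FROM member_transactions
               GROUP BY member_id, TRUNC( transaction_date, 'MM' ) ) t
    ON  t.member_id   = m.member_id
   AND  t.month_start = m.month_start;
The transaction-grain table is essentially the same join without the GROUP BY, which is why it ends up at the wrong grain for reconciliation but handy for research.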
On the right is a basic diagram of the process.
The process initially took eight hours to complete, partly because it runs sequentially. However, not all parts of the process are dependent on one another; it isn't until the final two steps (Target Table 1 and Target Table 2, in yellow) that they need to run sequentially.
I wanted to speed this up and began thinking about ways to do it (assuming as much tuning as possible had already been done).
1. I could use our scheduler or unix shell scripts.
2. Use a table based approach as ProdLife did.
3. Utilize PL/SQL and DBMS_JOB.
I chose number 3 initially and that's the focus of this post. I'll detail why I didn't use this method at the end.
The first thing I had to figure out was how to get PL/SQL to wait. Having read a few posts on AskTom, I remembered the SLEEP procedure. After a quick scan of the site, I found that it was part of the DBMS_LOCK package. I asked the DBAs to give me access so that I could begin testing.
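As an aside, EXECUTE on DBMS_LOCK is not granted to PUBLIC by default, so the request to the DBAs boiled down to something like this (the schema name here is made up):
-- run as a privileged user; lets the application schema call DBMS_LOCK.SLEEP
GRANT EXECUTE ON SYS.DBMS_LOCK TO my_app_schema;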
I figured that if I could wait long enough, it would be easy to "poll" the USER_JOBS view to see when the jobs had finished. I'm just going to show code snippets, as the whole thing can get quite long.
I first determined that the error Oracle returns for a job that isn't there is -23421. That will let me know when a job is complete. Next, I declared variables for each job to run.
DECLARE
   no_job          EXCEPTION;
   PRAGMA EXCEPTION_INIT( no_job, -23421 );
   l_exists        NUMBER;
   l_dollars_job   NUMBER;
   l_members_job   NUMBER;
First thing I do in the body is create the jobs using DBMS_JOB.SUBMIT.
BEGIN
   dbms_job.submit
      ( job       => l_dollars_job,
        what      => 'BEGIN p_mypackage.get_dollars; COMMIT; END;',
        next_date => SYSDATE );

   dbms_job.submit
      ( job       => l_members_job,
        what      => 'BEGIN p_mypackage.get_members; COMMIT; END;',
        next_date => SYSDATE );

   COMMIT;
Make sure you issue the COMMIT statement after the jobs have been submitted.
Here's the fun part. I created a loop that calls DBMS_LOCK.SLEEP and waits for 60 seconds. After each wait, I check whether the job is still in the USER_JOBS view. This gives the jobs up to 100 minutes to complete.
   FOR i IN 1..100 LOOP
      dbms_lock.sleep( 60 );

      IF l_dollars_job IS NOT NULL THEN
         BEGIN
            SELECT 1
              INTO l_exists
              FROM user_jobs
             WHERE job = l_dollars_job;
            l_exists := NULL;
         EXCEPTION
            WHEN no_data_found THEN
               l_dollars_job := NULL;  -- job is finished
         END;
      END IF;

      IF l_members_job IS NOT NULL THEN
         BEGIN
            SELECT 1
              INTO l_exists
              FROM user_jobs
             WHERE job = l_members_job;
            l_exists := NULL;
         EXCEPTION
            WHEN no_data_found THEN
               l_members_job := NULL;  -- job is finished
         END;
      END IF;
The next step is to determine when to exit the loop. Hopefully the jobs will finish in time and we move on to the next step, but if not, you want to exit gracefully. Well, semi-gracefully anyway.
      IF l_dollars_job IS NULL
         AND l_members_job IS NULL
      THEN
         EXIT;
      ELSIF i = 100 THEN
         BEGIN
            dbms_job.remove( l_dollars_job );
         EXCEPTION
            WHEN no_job THEN
               NULL;
         END;

         BEGIN
            dbms_job.remove( l_members_job );
         EXCEPTION
            WHEN no_job THEN
               NULL;
         END;

         -- abort the run, it is taking too long
         raise_application_error( -20001, 'DOLLARS/MEMBERS data not loaded in a timely manner...' );
      END IF;
   END LOOP;
END;
That's all there is to it.
In the end, though, I was convinced not to use this method, as restartability would be difficult. Perhaps this method combined with the table-based approach would be ideal. I'll leave that for another day though.
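If I ever do combine the two, I picture something like a small step-status table that each submitted job updates, so a rerun can skip the pieces that already finished. This is purely a sketch with invented names, not what we actually built:
-- Sketch only: each job marks its row COMPLETE when it finishes,
-- so a restarted run can skip steps that already succeeded.
CREATE TABLE load_step_status
( step_name     VARCHAR2(30) PRIMARY KEY,
  status        VARCHAR2(10) DEFAULT 'PENDING',  -- PENDING / RUNNING / COMPLETE
  last_update   DATE
);

-- The driver would only submit jobs whose step is still PENDING, e.g.:
-- SELECT status FROM load_step_status WHERE step_name = 'GET_DOLLARS';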
Tuesday, November 13, 2007
Death By a Thousand Cuts
In case I haven't mentioned it before, I work for a fairly immature IT organization. We're heading in the right direction, mind you, but defined and documented processes don't really exist.
Yesterday I went back and forth with our QA department about two truncate table statements. We had performed this in production before and we didn't change the code. It needed to be tested and I needed to do a unit test plan. Today I needed UAT signoff. What? There's no business user.
Our EDI group is overtaxed right now, so I wrote a process to delete data in "their" database on tables that they created. Part of that process was to drop the constraints and then recreate them with the ON DELETE CASCADE rule. Amusingly, they created most of them with the SET NULL rule and then had NOT NULL constraints on those columns. Sweet. I needed to do all this and finally delete the data. Of course, none of the environments (DEV/TEST/PROD) are the same, so what worked in DEV didn't work in TEST. I'm guessing that as soon as it is deployed in PROD, I will have missed something else. I'll look like the idiot too. Did I mention that I have to create indexes for all those foreign keys? No? Well, I did. Otherwise the process would take days to complete. There are some 13 tables in total: three child tables of the parent table and then multiple child tables of those child tables. Awesome!
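For each child table, the pattern was roughly the following (the names are invented; the real constraints came out of the data dictionary):
-- Hypothetical names. Recreate the foreign key with ON DELETE CASCADE and
-- index it so the cascading delete doesn't full-scan the child table for
-- every parent row removed.
ALTER TABLE claim_detail DROP CONSTRAINT fk_claim_detail_claim;

ALTER TABLE claim_detail
  ADD CONSTRAINT fk_claim_detail_claim
  FOREIGN KEY ( claim_id )
  REFERENCES claim ( claim_id )
  ON DELETE CASCADE;

CREATE INDEX idx_claim_detail_claim_id ON claim_detail ( claim_id );

-- After that, a single delete against the parent does the rest:
DELETE FROM claim WHERE load_date < TO_DATE( '2007-01-01', 'YYYY-MM-DD' );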
I'm on my fifth day of trying to complete a "simple" delete now.
Since the FBI raided, QA has become more strict. OK, understandable, but there was never a notice saying so, nor what the new rules would be.
I finally get my final approval and the work orders are created. Release Management gives me the all clear (the files were staged for the DBAs). I run over to let the DBA know, and he wants to go through the instructions before I leave to make sure everything's clear. We then go into the staging folder and only two of the eight files are there. Deep breath. I call the Release Management group and let them know. It will be an hour before they get home.
I know that was an honest mistake. When I went back to make sure I filled out all the forms correctly I realized that I had left one file off of my build list, so I originally only had seven.
The DBA just called; he was slammed with other critical deployments and production problems and wasn't able to get to it tonight. It would happen in the morning.
A "simple" delete will have taken seven days to complete.
Slow slicing indeed...
Monday, November 5, 2007
I Want to Be Better Than Tom Kyte
OK, that got your attention. Somehow I knew it would.
I believe Mr. Kyte to be one of the foremost experts in Oracle development. His solutions are usually simple and concise. His philosophy is simple and concise as well: logic belongs with the data (in the database), don't reinvent the wheel if it's already been created (use the supplied packages), and keep it simple. Of course Mr. Kyte may have objections to some of that, but that's the general idea I have gleaned over the past five years.
I am very competitive
1. One of the reasons I got into IT in the first place was that I didn't like this whole group of people knowing more than I did.
2. I grew up playing baseball, and I liked being better than most of the other kids. I still believe that if I hadn't drunk away my opportunity in college, I'd still be playing.
3. I'm an only child; I'm used to the attention and crave it. How do I get it now? By being better than everyone else.
My goal is to be the best developer in the world
Will I ever achieve that? Could it even be measured? Is there some sort of Oracle developer competition out there? Perhaps I should start small...be the best Oracle developer at WellCare, then Tampa, then Florida, the U.S., North America, the Northern Hemisphere, and finally the World!
I am probably not the best Oracle developer at WellCare, so I have a ways to go. That's what drives me though: trying to be the best. I'm surrounded by a lot of smart people, which is a good thing. No...a great thing. I've been the lone wolf developer for too long. Now I have the opportunity to learn directly (as opposed to just reading) from others. There is give and take. Sometimes my solution is the best and sometimes it is not.
I don't believe my competitive nature interferes with my interpersonal relationships (I hope not anyway). It is more of an internal thing to me. Once upon a time I was skinny and in shape and I did triathlons. I wanted to be a pro (laughable). Each time though I tried to outdo my previous performance. Did I want to win? Sure I did, but it was more important for me to improve.
I believe that I am strong enough to take criticism from others. I can admit when I'm wrong (see countdown timer above).
I do want to be the best. I'll probably never have the opportunity (nor the time) to do what Mr. Kyte has done. I'm not going to strive to be mediocre though. Whether I realize that goal or not is mainly irrelevant, but that is my goal...to be better than Mr. Kyte.