Tuesday, December 30, 2008
Countdown Timer Part II
In that vein, I've created a new one marking the consecutive time I've been employed! As noted in my previous post, I've had difficulty holding on to jobs this past year. Three times, in fact: fired once, laid off twice and brought back twice (by the same company).
Here's to not having to reset this effin clock in 2009!
Cheers!
SQL Objects vs. PL/SQL Tables
After getting it working, I then began to port it to packaged code, which is where I ran into a little problem.
Here's the initial statement, which runs fine:
DECLARE
TYPE r_records IS RECORD
(
owner VARCHAR2(30),
package_name VARCHAR2(30),
procedure_name VARCHAR2(30),
overload VARCHAR2(40),
argument_name VARCHAR2(30),
sequence INTEGER,
in_out VARCHAR2(9)
);
TYPE t_records IS TABLE OF R_RECORDS INDEX BY BINARY_INTEGER;
l_records T_RECORDS;
BEGIN
SELECT
owner,
package_name,
object_name,
overload,
argument_name,
sequence,
in_out
BULK COLLECT INTO l_records
FROM dba_arguments
WHERE package_name = 'DBMS_UTILITY'
AND object_name IN ( 'COMPILE_SCHEMA', 'INVALIDATE' )
ORDER BY owner, package_name, object_name, position;
END;
/
My first refactoring step was to move the TYPE declaration into the package header. But you can't do that (I'm a little rusty on PL/SQL tables), so I created SQL objects, or user-defined types (UDTs), instead.
CREATE TYPE r_procedure_arguments AS OBJECT
(
owner VARCHAR2(30),
package_name VARCHAR2(30),
procedure_name VARCHAR2(128),
overload VARCHAR2(40),
argument_name VARCHAR2(30),
position INTEGER,
sequence INTEGER,
in_out VARCHAR2(9)
);
/
show errors
CREATE TYPE t_procedure_arguments AS TABLE OF R_PROCEDURE_ARGUMENTS;
/
show errors

Then I rewrote the anonymous block to use the UDT.
CJUSTICE@ELEVEN>DECLARE
2 l_records T_PROCEDURE_ARGUMENTS;
3 BEGIN
4 SELECT
5 owner,
6 package_name,
7 object_name,
8 overload,
9 argument_name,
10 position,
11 sequence,
12 in_out
13 BULK COLLECT INTO l_records
14 FROM all_arguments;
15 END;
16 /
FROM all_arguments;
*
ERROR at line 14:
ORA-06550: line 14, column 3:
PL/SQL: ORA-00947: not enough values
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored

Not enough values? OK, let's go through it. There are 8 "columns" in the defined object and 8 in the SELECT clause. Hmmm...I tried adding some extra values at the end:
DECLARE
l_records T_PROCEDURE_ARGUMENTS;
BEGIN
SELECT
owner,
package_name,
object_name,
overload,
argument_name,
position,
sequence,
in_out, 1, 2, 3, 4
BULK COLLECT INTO l_records
FROM all_arguments;
END;
/

Same result, not enough values.
I spent the next hour toggling between gmail, facebook and meebo thinking about the problem.
Wait! You have to "cast" the values from the SELECT clause!
CJUSTICE@ELEVEN>DECLARE
2 l_records T_PROCEDURE_ARGUMENTS;
3 BEGIN
4 SELECT
5 R_PROCEDURE_ARGUMENTS( owner,
6 package_name,
7 object_name,
8 overload,
9 argument_name,
10 position,
11 sequence,
12 in_out )
13 BULK COLLECT INTO l_records
14 FROM all_arguments;
15 END;
16 /
PL/SQL procedure successfully completed.

Duh.
So I post it here so next time I don't waste an hour trying to remember why I can't get it to work.
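For completeness, the packaged version ends up looking something like this rough sketch (the package and function names here are mine, not from anything above):

CREATE OR REPLACE PACKAGE argument_api
AS
  FUNCTION get_arguments RETURN t_procedure_arguments;
END argument_api;
/
show errors
CREATE OR REPLACE PACKAGE BODY argument_api
AS
  FUNCTION get_arguments RETURN t_procedure_arguments
  IS
    l_records T_PROCEDURE_ARGUMENTS;
  BEGIN
    --the SQL object type can be referenced from packaged code just fine
    SELECT
      R_PROCEDURE_ARGUMENTS( owner,
                             package_name,
                             object_name,
                             overload,
                             argument_name,
                             position,
                             sequence,
                             in_out )
      BULK COLLECT INTO l_records
      FROM all_arguments
     WHERE package_name = 'DBMS_UTILITY'
       AND object_name IN ( 'COMPILE_SCHEMA', 'INVALIDATE' );
    RETURN l_records;
  END get_arguments;
END argument_api;
/
show errors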
Monday, December 29, 2008
This is the LAST time...
Wow is really all I can say. Perhaps I should write a book or something? These past couple of years certainly qualify as book worthy. :)
Saturday, December 27, 2008
The Tampa Timeline
November 27, 2006 - Begin work at WellCare as a contractor.
December 2, 2006 - Closed on Gainesville house
December 17, 2006 - Purchased Tampa house
December 26, 2006 - Rolled into a fulltime position at WellCare in the Datawarehouse group
January 4, 2007 - New CIO announced at WellCare
April 21, 2007 - Kate gives the Reaper the finger
Summer 2007 - Awarded on-the-spot bonus, only to be *PIPped 2 days later
August 6, 2007 - Sent a "manifesto" to my CIO
August 14, 2007 - Started this blog
October 24, 2007 - FBI Raids WellCare with 200 agents
January 26, 2008 - CEO and CFO resign from WellCare
April 24, 2008 - Gave my first professional presentation on Application Express
May 22, 2008 - I write a short statement on my blog that WellCare is doing layoffs
May 27, 2008 - I am fired from WellCare, with cause, for posting company "confidential" information
May 31, 2008 - I get my official offer from Revolution Money as a contractor
June 15, 2008 - My paternal Grandfather passes away at 89
July 27, 2008 - I'm let go from Revolution Money
September 2, 2008 - I start as a fulltime employee at Revolution Money
December 26, 2008 - I'm furloughed from Revolution Money
December 29, 2008 - I'm asked to come back to Revolution Money.
April 8, 2009 - I resign and am then escorted from the premises.
April 10, 2009 - I start my new job.
October 16, 2009 - I get
November 3, 2009 - Became an Oracle ACE
What have I learned in this time?
1. Don't go into the office. If your boss says he needs a word, say you have to go to the bathroom and run for your car.
2. Leave early. If layoffs are coming and you aren't there, they can't let you go!
3. Don't check email while on vacation.
4. Life is rough sometimes, but you have to roll with it.
5. Have fun. We spend more time with those at work than we do with our families. Enjoy yourself while you are there. I got to play ping pong every day (after 5, naturally) at Revolution Money with my Arch Nemesis. I won two times...out of a thousand games. But it was fun.
*Performance Improvement Plan
Friday, December 26, 2008
Looking...Again!
Tuesday, December 23, 2008
Exception Handling WTF
Trying to remove references to a table that no longer exists (a view was created in its place), I found this little gem:
OPEN c FOR
SELECT 'INVALID USER'
FROM dual;
RETURN c;
It was nested in an IF-END IF block.
Instead of just raising an exception or returning zero rows, this was the preferred method.
Awesome!
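For the record, what I would have expected in that spot is something closer to this (a sketch; l_user_count and the error number are made up):

IF l_user_count = 0 THEN
  --let the caller deal with it instead of handing back a fake 'INVALID USER' row
  raise_application_error( -20001, 'Invalid user' );
END IF;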
Monday, December 15, 2008
DDL Triggers and Security
We're trying to move to a named account approach (for developers too) and the only way to do that is to grant system level privileges. That's not good.
I had read about DDL triggers in the past but wasn't sure exactly how they could be used. Well, let's just say you can do quite a lot. There are database level events and client level events. The database level events include STARTUP, SHUTDOWN, DB_ROLE_CHANGE and SERVERERROR. A few of the client level events (BEFORE and AFTER) include DDL, LOGOFF, LOGON, GRANT, DROP, CREATE, ALTER, etc.
Being a developer my entire career (with certain DBA skills), I had never quite wandered into this territory. I was rather impressed.
The three I thought would make the most sense were ALTER, DROP and CREATE.
If you have system-level privileges, my biggest concern would be CREATE ANY PROCEDURE and EXECUTE ANY PROCEDURE (though the latter is not one that is currently granted). I've used those 2 privileges to get the DBA role in the past.
Here's what I need to do:
1. Does the user have the DBA role?
SELECT grantee
INTO l_grantee
FROM dba_role_privs
WHERE grantee = ora_login_user
AND granted_role = 'DBA';
If they do have the role, it exits right then.
If not,
2. Where is the user trying to create/drop/alter an object? Is it in an approved schema? Their own?
3. raise_application_error if it's not approved or not their own object
Easy enough:
CREATE OR REPLACE
TRIGGER system.no_create_in_dba_schema
BEFORE CREATE
ON DATABASE
DECLARE
l_user VARCHAR2(30) := ora_login_user;
l_owner VARCHAR2(30) := ora_dict_obj_owner;
l_object_name VARCHAR2(128) := ora_dict_obj_name;
l_dba VARCHAR2(30);
l_can_create BOOLEAN;
BEGIN
--is DBA?
SELECT grantee
INTO l_dba
FROM dba_role_privs
WHERE granted_role = 'DBA'
AND grantee = l_user;
EXCEPTION
WHEN no_data_found THEN
l_can_create := can_user_create_alter_drop
( p_user => l_user,
p_owner => l_owner );
IF l_can_create = FALSE THEN
raise_application_error( -20001, 'cannot CREATE objects in ' || l_owner );
END IF;
END no_create_in_dba_schema;
/
show errors

In preparation for this post I found two very similar articles by Arup Nanda and Tom Kyte. I think they've trained me well.
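The can_user_create_alter_drop function isn't shown above; a minimal sketch of what I have in mind follows (the approved_schemas lookup table is made up purely for illustration):

CREATE OR REPLACE FUNCTION can_user_create_alter_drop
( p_user  IN VARCHAR2,
  p_owner IN VARCHAR2 ) RETURN BOOLEAN
IS
  l_count PLS_INTEGER;
BEGIN
  --a user can always work on objects in their own schema
  IF p_user = p_owner THEN
    RETURN TRUE;
  END IF;
  --otherwise the target schema has to be explicitly approved for that user
  SELECT COUNT(*)
    INTO l_count
    FROM approved_schemas --hypothetical lookup table
   WHERE grantee     = p_user
     AND schema_name = p_owner;
  RETURN ( l_count > 0 );
END can_user_create_alter_drop;
/
show errors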
Tuesday, December 9, 2008
Application Developers vs. Database Developers: Part II
This was prompted by a recent Oracle email blast about the Exadata storage system/Warehouse. As I did before, I'll just put the email here.

Mr. M:

OH YEAH BABY!!! TEN TIMES FASTER!!!! YEAH!!!!!!!!

Hey seriously, what a tub of shit Oracle is. Where does this myth come from that it's such a great platform? Their client tools suck balls and it's generally just a pain in the ass to work with from a developer's point of view. But devs for some reason are under this impression that from the server perspective it's rock solid and performant. Well, it may be solid, but it's a fucking turd. Our dba here - definitely an Oracle guy - has been tasked with looking into moving us onto an oss db. He basically refuses to even look at MySQL, stating that it's a mickey mouse worthless pile of shit (can't really argue with him there lol), so that basically leaves Postgres. So it's been a few weeks now, and he will basically admit now that Postgres completely waxes Oracle as far as performance goes. LOL We run RAC in production too. He's looking at failover, replication, blah blah blah now, we'll see what the verdict on that is. Oh, and Oracle AQ? That's a worthless pile of shit too. Why do they tack a fucking message queue that doesn't play nice with jms onto the fucking database, which is probably already overworked? Oh wait, that's right, they're in the business of selling per cpu licenses! Cocksuckers.
Me:
Agreed, their client tools aren't all that great. Which ones are you using?
I use SQL*Plus (naturally), SQL Developer and JDeveloper. The latter 2 tie in very nicely with Subversion. With JDeveloper, when I want to try out Java, it's got a pretty good suite of tools.
Oracle started out catering to businesses; Microsoft started with the consumer. Oracle has made pretty good strides in making their software more usable while Microsoft has made theirs more scalable.
I haven't used AQ a whole lot and definitely not with Java. I do know that it's billed as JMS compliant.
Postgres has its place and so does Oracle. It is a great platform if you know how to leverage it. PL/SQL is a 3GL (or is it 4) and you can write entire applications using it...and if you do that right, an incredibly robust and scalable application at that.
This was followed by a package he had been working on. I wouldn't say it was the greatest, but it wasn't all bad either.

Mr. M:

"It is a great platform if you know how to leverage it. PL/SQL is a 3GL (or is it 4) and you can write entire applications using it...and if you do that right, an incredibly robust and scalable application at that."
NO!!! NO!!! NOOOOO!!!
I want to beat people like you who say this with a ball pean hammer. There are only a select few individuals on this earth who can write and read application logic written in SQL. AVOID THIS ANTI-PATTERN AT ALL COSTS! What is it with you f_cking database guys??? Why do you always want to stuff as much crap into the db as possible?
DUDE! SQL IS A PILE OF SHIT FOR EXPRESSING APPLICATION LOGIC!!!
It's fine for querying and manipulating sets of data, in a relational database. But it is a worthless sack of shit for expressing application logic!
I'm having to dig through this f_cking abortion right now because some Oracle f_ckhead thought "you can write entire applications using it!" Blog that, mofo!
Me:
goodness gracious.
"DUDE! SQL IS A PILE OF SHIT FOR EXPRESSING APPLICATION LOGIC!!!"
disagree (naturally). It's incredibly easy to do, you just don't know how yet...and it seems even the Oracle professionals out there don't either.
I'll tell you this, the crazier the SQL or PL/SQL needed to derive and manipulate data, the poorer the design. Start with a good design and it all becomes very simple.

(Of course, note the use of "naturally" in my lexicon. Thanks Jake.)
Mr. M:
well dude, we are back to our old discussion - you arguing that procedural sql code is perfectly fine for building apps, and by extension, that the last 20 years of computer science have basically been a misguided lost journey down the meandering, fruitless trail of oop. um.....no. select this from that. otherwise keep that sql crap caged up where it belongs.
btw, do the db guys here suck? seriously. i'm not competent enough to judge. (to be fair, apparently that crap i sent you is fairly old stuff, but still....)
Me:

I would say, based on limited knowledge of software development, that the OOP movement was started because the database (specifically Oracle) was not mature enough to do what was needed. Plus, I seem to recall that the OOP movement was supposed to have solved all the world's problems by now.

Me (again):
It's further propagated due to the needs you specified in our discussion that day at WellCare (i.e. performance). I still believe that if application developers better understood how a DB works (Oracle, MySQL, SQLServer, etc) they would end up writing less code. Database constraints alone force you to write less (and better) code simultaneously ensuring good data.
The code that I did look at (first 1000 lines or so) isn't great.
1. With all those IF THEN ELSE statements it's telling me that there's probably a better way to store the data. Looks like they're missing an attribute that should be contained with a track.
2. using Object (PL/SQL) types to store data in the way they seem to be doing it is not the best way. Again, probably a design issue.
3. When you do something like this:
UPDATE pb_album_metadata
SET primary_digital_flag = 0
WHERE album_id IN (SELECT b.album_id
FROM (SELECT a.album_id AS album_id,
MAX(a.album_id) OVER (PARTITION BY a.standard_upc) AS latest_album_id
FROM pb_album_metadata a
WHERE a.standard_upc = g_album_tab(1).standard_upc ) b
WHERE b.album_id <> b.latest_album_id )
AND primary_digital_flag <> 0;
They should probably have considered end_date as an attribute of the album metadata. While analytic functions are pretty cool, they're more for analyzing (OLAP) and not OLTP environments.
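For illustration, if the table carried an end_date (a hypothetical column here, populated when an album version is superseded), that statement could collapse to something like:

UPDATE pb_album_metadata
   SET primary_digital_flag = 0
 WHERE standard_upc = g_album_tab(1).standard_upc
   AND end_date IS NOT NULL
   AND primary_digital_flag <> 0;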
That's for starters and without table definitions...
oh yeah...and PL/SQL is/was built on top of ADA, FYI.

Mr. M:
"I still believe that if application developers better understood how a DB works (Oracle, MySQL, SQLServer, etc) they would end up writing less code. Database constraints alone force you to write less (and better) code simultaneously ensuring good data."Me:
Huh? What are we not understanding? What would be an example of a constraint that would force us to write less and better code?
Me:

CHECK, NOT NULL (same as CHECK) and FOREIGN KEY constraints all fit into that category.
Instead of having to check if a value is NULL when submitting it to the database, just submit and let the database throw an error, which you'll conveniently capture in your exception block. The same goes for CHECK constraints: columnA should be one of three values (test, test1, test3); you won't have to check that in your application code, just catch the exception. FKs ensure good data (proper relationships).
A different perspective. If you are going to pay for it, use that damn thing. If you don't care, don't do it. But over the course of an "enterprisey" application, you'll end up writing more code and make maintenance that much more difficult (did I do it there?). Just those constraints will force you and the teams of application developers to put good data into the database.
You can still do it in the application of course (form validation is a good place)...
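(To make that concrete, this is the sort of thing I mean; the table and column names are made up:)

CREATE TABLE orders
(
  order_id    NUMBER(10)
    CONSTRAINT pk_orders PRIMARY KEY,
  customer_id NUMBER(10) NOT NULL
    CONSTRAINT fk_orders_customers REFERENCES customers( customer_id ),
  status      VARCHAR2(10) NOT NULL
    CONSTRAINT ck_orders_status CHECK ( status IN ( 'NEW', 'SHIPPED', 'CANCELLED' ) )
);

Submit a bad status or an unknown customer_id and the database rejects it; the application just catches the exception.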
Mr. M:

Ahh, jeez dude, I wasn't sure if you were referring to the literal "constraint" or not.
Dude, even f_cksticks like [redacted] I think have a decent enough understanding of when and how to use db constraints. It's when you get into things like cursors or cost plans of subselects and anonymous tables (i think that's the name for it - where I say select something from (select some other crap). Then we defer to db gurus like yourself.
But dude....."you won't have to check that in your application code, just catch the exception".......uh, don't ever repeat that off our little email group. And definitely don't go posting that on your blog. F_ck me man, it's a damn good thing we keep you db folks caged up in that rdbms box....
Me:

So we've had this discussion at work...for a high transaction system, do Java/C/etc handle exceptions well or what?
Why is it bad to deal with exceptions rather than coding to avoid them?
I highly doubt even [redacted] understood database constraints...him and his cohorts believed all database software would be commoditized and MySQL would be king in short order.
"for a high transaction system"I was laughing at this point because the link above points to one of our consulting architects (I'm not really sure what his role is at this point).
Or for any system really....
To use your example of check constraints (is this value one of....) or not null constraints, checking these rules in the Java code and in the database code would seem to violate DRY. But we do that alot, and it is acceptable in certain cases. For instance, we also probably violate DRY if we're checking this same rule say in Javascript on the front end. But we accept this tiny violation of DRY because it tends to make for a better user experience and as a performance gain too, for we avoid the round trip to the server. Now, what your advocating here is close to the same thing. You're basically saying, don't check for the not null constraint in Java code, just go ahead and hit the database, let the database throw back an exception to the Java code, presumably correct the problem, and then make another roundtrip to the database again. Dude, what are you thinking?!? This to say nothing of the fact that this also could be considered a violation of Fail Fast, and a violation of Defensive Programming - what happens if the dba forgot to add the not null constraint in production?
Dude, listen to this guy. For a "high transaction system" basically you want to treat the database, as much as you can, as just a dumb data holder. A f_cking dumpster, that you just throw sh_t into and pull shit back out, using no vendor-specific features if at all possible.
Again, for we've had this discussion, but even in everyday apps, not just on Wall Street, the database is the bottleneck. And the database is the hardest layer to scale. So given those facts, you should design your app to do as little work in the database as possible.

I was laughing at this point because the link above points to one of our consulting architects (I'm not really sure what his role is at this point).
Me:
i agree in any application that you want to minimize the number of round trips...
shocker...he's one of our architects. he's spot on in many instances, but...
database is the bottleneck because people don't know how to write SQL. I'll certainly concede the wall street applications (for the time being anyway), but the rest of us with what we do on a daily basis...Oracle will scale far beyond the demands they place. When that bottleneck shows up, 9 times out of 10 some dumb-ass c#/java guy thought he could write a better query than i. besides, what's the idiot doing anything but a simple join anyway? probably poor data model to start with...and we're right back where we started (sort of).
"database is the bottleneck because people don't know how to write SQL.....some dumb-ass c#/java guy thought he could write a better query than i."Mr. V (note, not Mr. M):
Dude, I'll grant you, people don't know how to write SQL, myself included. But that's not always why the database is the bottleneck. I think it's safe to say that's not even the majority of the reason. Yes, there are apps written by people who were just idiots, and they needlessly pummel the database, but that's definitely not the majority of scenarios. At my work the database is the bottleneck, and we run RAC in production. It's my understanding that even with RAC, there is a limit to how much you can scale that out. But any layer up from the database we are basically unlimited in how much we can scale that out. So it's very easy to stick another Apache box in front, or fire up another Weblogic server. But I can't do that with the database. We have 24 Weblogic servers forredacted. The database is the bottleneck. And we don't have shitty sql code in the app. In fact, we have very few hand-written queries anywhere in the app. Persisting something to a database is really a low-level concern that as an application developer I shouldn't even have to be bothered with, except for the rare corner cases where the persistence abstraction I'm using is just a little bit too generic to handle things effectively. And we don't use these ORMs because we don't know how to write sql. To effectively use an ORM tool requires a deeper understanding of sql and databases than simply being able to write solid SQL code. (Not saying Java devs who use ORMs know SQL better than a dba, just that it requires better sql skills than is required of a Java dev to simply write JDBC/SQL.) Now, before you try to tell me that my ORM library isn't able to write as efficient of sql code as a dba of your caliber, keep in mind that ORM tools are pretty advanced. They're able to intelligently do things like batch sql updates, and let you apply transactional semantics much easier than with raw jdbc/sql. But the overwhelming reason developers have so thoroughly adopted ORM is because Structured Query Language is such a nasty piece of shit for expressing application logic. SQL is a declarative, procedural language. It's totally unsuited for writing application logic! This, more than anything else, is why organizations and dev teams should seek to restrict what is handled within a relational database as much as possible - because the programming interface to it is a fucking ancient backward dying dinosaur.
My 2 canadian cents:
The polyglot approach "... use different languages for different domain ..."
Database was developed to manipulate data and should remain there.
General purpose language was developed to encapsulate logic and should remain in that domain.
You should not use DB to encapsulate business logic (in my opinion) no more than you would use HTML to create complex logic.
While Java, C#, Python, etc are described as General Purpose languages, they, too, are really domain-constrained. Their expressiveness are confined (and should be) to express low-level, tersed, explicit, verbose, and repetive logic (if that makes any sense). Languages such as these are more suitable for low-level abstraction on top of which, richer, more expressive languages can be built. We are seeing this now with the emergence of languages on the JVM (and arguably on the .Net's CLR).
I think SQL as we know will take a back seat and a smart developer somewhere will create a new domain-specific language that lives on the VM and will push the SQL expression out of the RDBMS and closer to the code that needs it. We are not there yet, but Microsfot is trying (see LINQ and all ORM papers). This is not to say that there won't be isntances where tuning in the SQL-native language won't be necessary. However, 80 to 90% of simple CRUD cases will be handled closer to the logic code that uses the data code.
Again, that's my 2 canadian cents... I could go on. But I have a meeting with [redacted].
Me:
Someone will probably create something like that, but it still gets at the heart of one of my arguments: many developers don't know how to use a database and thus will go to any means to circumvent it. Embrace it, I say.
Ultimately for me, it comes down to simplicity. I can write an application using PL/SQL that will scale very well for 90% of the solutions. Of course that doesn't include the "fancy" javascript/Ajax stuff needed for the UI. In my ever so humble opinion, this is a much better solution for a business in the long run.
1. You're more likely to change the middle tier than the database tier. Java, asp, Ruby, what's next?
2. Fewer moving parts means a lower cost of ownership, even with the CPU costs. One person, one, was able to build and maintain a 350-page website. You'd be hard-pressed to do that with the more expressive languages.
I think all of us are purists and very passionate about what we do. It's probably the main reason we get along. I thoroughly enjoy these conversations because it does force me to think...and that's always a good thing.
Mr. V:

Haaa chet.
You sorta made my point than diverged away. Building an app in all PL/SQL is dangerous. It's no safer than me building an app in all Java. I can build very limited app in all Java. The moment I need to interact with other domain (UI, data, low-level native code, etc), I have to switch to something that is closer to what I am trying to do. If I need to create UI, I will pick a ui-centric environment, when I need to talk to DB, I will pass connect to a db and send in my SQL, and so forth. I will use Java as the glue to bring it all togher.
In the end, we may be saying the same thing, but using different accent. O well.

And that's where it ended. I must say it's always fun. Mr. M and Mr. V are both very smart individuals and I highly respect what they do. We have different perspectives...but I think they listen, if only a little, as I listen to them. Their voices creep up on me especially now...which is definitely a good thing.
Monday, December 8, 2008
PL/SQL Collections: FIRST and LAST
I kept getting an ORA-06502: PL/SQL: numeric or value error when I ran it. Initially I thought that the wrong value was being passed to the procedure. So I littered the code with DBMS_OUTPUT.PUT_LINE calls to find out exactly where it was failing, and noticed that it would stop just before looping through a collection.
So I put a:
DBMS_OUTPUT.PUT_LINE( 'collection count: ' || l_collection.COUNT );

just before the loop. When I ran it, it showed a count of 0, hmmmm....
The loop used the FIRST/LAST syntax like this:
FOR i IN l_collection.FIRST..l_collection.LAST LOOP

I wonder if the fact that there aren't any elements has something to do with it?
So I rewrote it to use:
FOR i IN 1..l_collection.COUNT LOOP

Loaded it back into the database and voila! It worked.
I then had to let the original developer know so I wrote up a simple test case for it.
CREATE TABLE t
(
id NUMBER,
something_else VARCHAR2(20)
);
DECLARE
TYPE myrecord IS TABLE OF T%ROWTYPE;
l_collection MYRECORD;
BEGIN
SELECT *
BULK COLLECT INTO l_collection
FROM t;
FOR i IN l_collection.FIRST..l_collection.LAST LOOP
NULL;
END LOOP;
END;
/

Run it and you get the error:
DECLARE
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 9

Changed it to 1..COUNT:
1 DECLARE
2 TYPE myrecord IS TABLE OF T%ROWTYPE;
3 l_collection MYRECORD;
4 BEGIN
5 SELECT *
6 BULK COLLECT INTO l_collection
7 FROM t;
8 FOR i IN 1..l_collection.COUNT LOOP
9 NULL;
10 END LOOP;
11* END;
12 /
PL/SQL procedure successfully completed.
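If you do want to keep the FIRST..LAST style, the usual guard is to skip the loop entirely when the collection is empty, since FIRST and LAST are both NULL at that point:

IF l_collection.COUNT > 0 THEN
  FOR i IN l_collection.FIRST..l_collection.LAST LOOP
    NULL; --real work goes here
  END LOOP;
END IF;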
Tuesday, December 2, 2008
Index those Foreign Keys
I'm helping out with a small piece of another project. Trying to get back into the habit of "good" unit testing, I have created some test data: one build script that runs the whole thing, a few user scripts and finally a teardown script that deletes all the data I have created.
Naturally, I run it via SQL*Plus; turning feedback and echo off and using a liberal number of PROMPT directives (very similar to how the ApEx install goes coincidentally). This is what my teardown script reports:
...deleting from child table 1
...deleting from child table 2
...deleting from child table 3
...etc, etc

Nothing fancy. Then I realized it was hanging on the final DELETE. Any locked objects? Nope. Maybe it's the trigger? I checked, only saw ON INSERT OR UPDATE, and confirmed that with another developer. He suggested unindexed foreign keys. Huh? Really? That's impossible, you say.
It wasn't impossible. It was true. So I ran my copy of Tom's "find unindexed foreign keys" script.
Quite a few "****" showed up, which is not good.
So I went searching for them myself and came up with this little query (for my specifc table):
SELECT
b.table_owner,
b.table_name,
b.constraint_name,
b.column_name,
a.index_name
FROM
dba_ind_columns a,
(
SELECT
a.owner table_owner,
a.table_name,
a.constraint_name,
b.column_name
FROM
dba_constraints a,
dba_cons_columns b
WHERE a.constraint_name = b.constraint_name
AND a.owner = b.owner
AND a.table_name = b.table_name
AND a.r_constraint_name = ( SELECT constraint_name
FROM DBA_CONSTRAINTS
WHERE TABLE_NAME = 'MY_TABLE'
AND owner = 'SCHEMA_NAME'
AND constraint_type = 'P' )
) b
WHERE b.table_owner = a.table_owner (+)
AND b.column_name = a.column_name (+)
AND b.table_name = a.table_name (+)
ORDER BY
b.table_name,
b.constraint_name;

That gave me a list of all the columns referencing the primary key (some 37 different tables). If index_name was NULL, then I knew I had to add one. Since I have this obsession lately with generating code, I just used this query and added the following:
( CASE
WHEN a.index_name IS NULL THEN
'CREATE INDEX schema_name.' || b.table_name || '_' || b.column_name || '_idx
ON ' || b.table_owner || '.' || b.table_name || '( ' || b.column_name || ' )
TABLESPACE my_tablespace;'
END ) sql_text

Now I have my CREATE INDEX statements and all is well.
I run the teardown script again and it finishes like it should.
Wednesday, November 26, 2008
FLASHBACK DATABASE
Typically, I've taken an export of the database objects at the point I want to restore to, done my testing, dropped all the users, recreated them and imported the dump file back in to "restore."
A colleague of mine sent me a link about GUARANTEE FLASHBACK DATABASE. Pretty cool.
Fired up the local instance and began.
SELECT log_mode, flashback_on
FROM v$database;
LOG_MODE FLASHBACK_ON
------------ ------------------
ARCHIVELOG YES

OK, I cheated and set this all up. My sandbox is typically set in NOARCHIVELOG mode and I had to figure out how to turn flashback on.
Now to create a restore point.
CREATE RESTORE POINT my_restore_point;

There are 2 types of restore points, Normal (the default) and Guaranteed. Normal restore points will age out of the control file after a set amount of time; guaranteed restore points must be explicitly dropped. Oracle has the capability to store thousands of restore points.
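A guaranteed restore point just adds the GUARANTEE clause (before_testing is simply a name I picked):

CREATE RESTORE POINT before_testing GUARANTEE FLASHBACK DATABASE;
--...break things, test away...
DROP RESTORE POINT before_testing;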
So I mocked up a small example:
SYS@ELEVEN>DESC CJUSTICE.DEPT
Name Null? Type
-------------------------- -------- ---------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
SYS@ELEVEN>ALTER TABLE CJUSTICE.DEPT ADD ( SOME_COLUMN NUMBER );
Table altered.
Elapsed: 00:00:00.87
SYS@ELEVEN>DESC CJUSTICE.DEPT
Name Null? Type
-------------------------- -------- ---------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
SOME_COLUMN NUMBER
SYS@ELEVEN>ALTER DATABASE CLOSE;
Database altered.
Elapsed: 00:00:05.57
SYS@ELEVEN>FLASHBACK DATABASE TO RESTORE POINT my_restore_point;
Flashback complete.
Elapsed: 00:00:05.84
SYS@ELEVEN>SHUTDOWN;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SYS@ELEVEN>STARTUP;
ORACLE instance started.
Total System Global Area 535662592 bytes
Fixed Size 1334380 bytes
Variable Size 327156628 bytes
Database Buffers 201326592 bytes
Redo Buffers 5844992 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
SYS@ELEVEN>ALTER DATABASE OPEN RESETLOGS;
Database altered.
Elapsed: 00:00:25.18
SYS@ELEVEN>DESC CJUSTICE.DEPT
Name Null? Type
------------------------ -------- -------------------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
Voila! Pretty amazing, me thinks.
I still don't know if you can do this without shutting down the database. But it suits my purposes for the time being.
I'm constantly amazed at how much I don't know about Oracle.
Sunday, November 23, 2008
A Java/PL/SQL Port Scanner II
From that I learned a valuable lesson in infinite loops and also that you could DELETE from dba_jobs.
Since I was too lazy to diagnose the TNS errors I was receiving from using UTL_TCP and I knew that the Java class was working, I decided to combine the 2 approaches on Friday night.
There are 5 cable boxes in the house which means 5 IPs to check. Looking more closely at the MAC addresses, I realize 4 are the same (or roughly the same). That led me to believe that the 5th was the DVR. But I'm going to check them all for fun.
Start with the table to capture those open ports:
CREATE TABLE open_ports
(
ip VARCHAR2(20),
port NUMBER(5,0)
);

Then I take the Java class (with the main method) and convert it into a standard Java class that can be loaded into the database:
CREATE OR REPLACE AND COMPILE
JAVA SOURCE NAMED "CheckPorts"
AS
import java.io.IOException;
import java.net.Socket;
import java.net.UnknownHostException;
public class CheckPorts
{
public static String CheckPorts( String ip, String port ) throws UnknownHostException, IOException
{
try
{
Socket socket = new Socket( ip, Integer.parseInt( port ) );
return "Y";
}
catch ( UnknownHostException e ){ return "N"; }
catch ( IOException e ) { return "N"; }
finally { }
}
}
/

Then I create a PL/SQL wrapper around the Java class:
CREATE OR REPLACE
FUNCTION check_port( p_ip IN VARCHAR2, p_port IN VARCHAR2 ) RETURN VARCHAR2
AS LANGUAGE JAVA
NAME 'CheckPorts.CheckPorts( java.lang.String, java.lang.String ) return String';
/
show errors

Then I wrap that call up in a standalone procedure (remember, this is just for fun).
CREATE OR REPLACE
PROCEDURE check_port_insert
( p_ip IN VARCHAR2,
p_port IN INTEGER )
IS
l_port VARCHAR2(1);
BEGIN
l_port := check_port( p_ip, TO_CHAR( p_port ) );
IF l_port = 'Y' THEN
INSERT INTO open_ports( ip, port )
VALUES( p_ip, p_port );
END IF;
EXCEPTION
WHEN others THEN
NULL;
END check_port_insert;
/

No, I didn't re-raise after the WHEN OTHERS; had I been doing this seriously, I would have trapped it or, better yet, caught the specific exception.
Finally my anonymous block to call check_port_insert and submit as a job (for threading). I picked 30 processes at a time so as not to overload my laptop...on the first go anyway.
DECLARE
TYPE r_record IS TABLE OF OPEN_PORTS.IP%TYPE;
l_table R_RECORD := R_RECORD();
job_count PLS_INTEGER;
job_number PLS_INTEGER;
BEGIN
l_table.EXTEND;
l_table(1) := '192.168.1.102';
l_table.EXTEND;
l_table(2) := '192.168.1.100';
l_table.EXTEND;
l_table(3) := '192.168.1.101';
l_table.EXTEND;
l_table(4) := '192.168.1.103';
l_table.EXTEND;
l_table(5) := '192.168.1.104';
FOR i IN 1..l_table.COUNT LOOP
FOR j IN 1..10000 LOOP
SELECT COUNT(*)
INTO job_count
FROM user_jobs;
IF job_count < 30 THEN
dbms_job.submit
( job => job_number,
what => 'BEGIN ' ||
' check_port_insert( ''' || l_table(i) || ''',' || j || '); ' ||
' COMMIT; ' ||
'END;',
next_date => SYSDATE );
COMMIT;
ELSE
dbms_lock.sleep( 5 );
END IF;
END LOOP;
END LOOP;
END;
/

A table of IP addresses holds each of the 5 known IPs to check. I used 10,000 as the number of ports to check (I think it goes higher...again, just being a lot lazy). Then a check against USER_JOBS to see if there are already 30 jobs; if so, wait 5 seconds (DBMS_LOCK.SLEEP), otherwise submit another job.
I probably could have sped things up and not brought down my system by submitting 100 jobs...I'll probably try that tonight.
So here are the results:
PL/SQL procedure successfully completed.
Elapsed: 02:33:18.98
CJUSTICE@ELEVEN>SELECT * FROM open_ports;
IP PORT
-------------------- ----------
192.168.1.102 7501
192.168.1.102 8082
2 rows selected.

50K ports in 2 hours and 33 minutes. Is that good? I never let the free port scanner I downloaded go this far so I have no idea. 2 ports open out of 50K. Not a lot. Then I used Telnet to connect to those ports:
Microsoft Telnet> o 192.168.1.102 7501
Connecting To 192.168.1.102...

No go. How about 8082?
Microsoft Telnet> o 192.168.1.102 8082
Connecting To 192.168.1.102...
HTTP/1.1 200 OK
Server: gSOAP/2.7
Content-Type: text/xml; charset=utf-8
Content-Length: 0
Connection: close
Connection to host lost.
Press any key to continue...

OK, so that's probably the port used to pull down the schedule/user guide.
Anyway, I got to try something new and learn a little bit about ports/sockets. I'm not done yet. Next I need to figure out how many possible ports there are...well, this tells me. So I guess I'll have to expand my search. At least now I can concentrate on the DVR (.102) but I still have to check out about 50K ports it looks like.
Thursday, November 20, 2008
Things I've Learned This Week
I also learned how to reset a sequence without dropping and recreating it. This was courtesy of my crazy DBA, oraclue.
Example:
SQL> CREATE SEQUENCE TEST_SEQ
2 START WITH 10
3 INCREMENT BY 10
4 MINVALUE -1000;
Sequence created.
SQL> SELECT test_seq.nextval FROM dual CONNECT BY LEVEL <= 10;
NEXTVAL
----------
10
20
30
40
50
60
70
80
90
100
10 rows selected.
SQL> SELECT * FROM user_sequences WHERE sequence_name = 'TEST_SEQ';
SEQUENCE MIN_VALUE MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER
-------- ---------- ---------- ------------ - - ---------- -----------
TEST_SEQ -1000 1.0000E+27 10 N N 20 210
SQL> ALTER SEQUENCE TEST_SEQ INCREMENT BY -10;
Sequence altered.
SQL> SELECT * FROM user_sequences WHERE sequence_name = 'TEST_SEQ';
SEQUENCE MIN_VALUE MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER
-------- ---------- ---------- ------------ - - ---------- -----------
TEST_SEQ -1000 1.0000E+27 -10 N N 20 90
SQL> SELECT test_seq.nextval FROM dual CONNECT BY LEVEL <= 10;
NEXTVAL
----------
90
80
70
60
50
40
30
20
10
0
10 rows selected.
SQL> SELECT * FROM user_sequences WHERE sequence_name = 'TEST_SEQ';
SEQUENCE MIN_VALUE MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER
-------- ---------- ---------- ------------ - - ---------- -----------
TEST_SEQ -1000 1.0000E+27 -10 N N 20 -110
SQL> ALTER SEQUENCE TEST_SEQ INCREMENT BY 10;
Sequence altered.
SQL> SELECT * FROM user_sequences WHERE sequence_name = 'TEST_SEQ';
SEQUENCE MIN_VALUE MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER
-------- ---------- ---------- ------------ - - ---------- -----------
TEST_SEQ -1000 1.0000E+27 10 N N 20 10
SQL> SELECT test_seq.nextval FROM dual CONNECT BY LEVEL <= 10;
NEXTVAL
----------
10
20
30
40
50
60
70
80
90
100
10 rows selected.
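Wrapped up as a procedure, the same trick might look something like this rough sketch (the procedure name is mine; it assumes the sequence normally increments by 1 and resets it toward zero):

CREATE OR REPLACE PROCEDURE reset_sequence( p_sequence_name IN VARCHAR2 )
IS
  l_value NUMBER;
BEGIN
  --grab the next value so we know how far to step back
  EXECUTE IMMEDIATE 'SELECT ' || p_sequence_name || '.NEXTVAL FROM dual' INTO l_value;
  --temporarily set a negative increment and take one step backwards
  EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || p_sequence_name ||
                    ' INCREMENT BY -' || l_value || ' MINVALUE 0';
  EXECUTE IMMEDIATE 'SELECT ' || p_sequence_name || '.NEXTVAL FROM dual' INTO l_value;
  --put the original increment back
  EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || p_sequence_name || ' INCREMENT BY 1';
END reset_sequence;
/
show errors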
Wednesday, November 19, 2008
Debugging ApEx with FOP
Anyway, he's off gallivanting in Europe right now and I offered to help support his application in his absence (slacker).
Of course he leaves me with a whopper of a problem...the PDFs won't generate correctly.
I've been out of the daily ApEx world for going on 2 years now. I try to keep up by reading blogs and such, but there's no replacement for doing it every day.
So we've got the latest version of ApEx (3.1.2) running on Oracle XE (dev environment) to be deployed on a 10.2.0.3 Enterprise Edition Oracle Server running on Windows. At least I understand Windows.
Dan is using FOP to render the PDFs. I have no idea what it means, only that it is Java based. Do I put it in the database? No. It runs from the OC4J server. OK, I have a little experience with that...but it was mostly command line, at some point they (being Oracle) finally made a nice and pretty front end (where the hell was that before?).
Back to ApEx. Interactive Reports. Check the box for PDFs (I'm skipping the whole setting it up through the administrative interface on purpose, but rest-assured, it was set up). I open the page, click on the Interactive Reports thingy, select PDF and Open. Invalid File Type. What? Tried saving it to file, same error.
Maybe this latest greatest Adobe Reader (9 something) can't read...but their latest greatest should be able to read any pdf created by any version right? I'll just open up the file in Notepad. Big clue...right at the top:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1186
ORA-12570: TNS:packet reader failure

Awesome! It's an Oracle error...I can fix that (hopefully).
I went through the forums, which led me here. Didn't really help me much. But I did realize the OC4J server wasn't running. Barnacles.
Started it through the provided .bat file. Go through the steps again, still invalid. But I have a new error in the file:
500 Internal Server Error
OracleJSP:
An error occurred. Consult your application/system administrator for support.
Programmers should consider setting the init-param debug_mode to "true" to see the
complete exception message.

Closer.
This is where my old command line skills came in handy. .bat file looks like this:
set ORACLE_HOME=C:\OAS\oc4j_101340
set JAVA_HOME=C:\Program Files\Java\jre1.6.0_07\
cd C:\OAS\oc4j_101340\bin
oc4j -start

I know where the oc4j file is now, let's start it from the command line:
c:\java -jar c:\oas\oc4j_101340\j2ee\home\oc4j.jar
08/11/19 21:57:01 Oracle Containers for J2EE 10g (10.1.3.4.0) initialized

Run the PDF again and check the console to see what happened. An error! Some sort of JSP exception, which of course I don't have and can't repeat now.
So I add -Dbuild.debug=true to the command. Right in the middle of it I see
!!!!!cannot find javac compiler!!!!!!

If ever a clue there was.
Which Java is it using? The JRE? Of course there's no compiler, that's a runtime environment. Where's JDeveloper? It has a compiler: c:\jdeveloper\jdk. Plug it in and voila! It works.
So apparently it has to be compiled on first use (it's been a while since I've used J2EE), and that's the reason I can't (or won't) recreate the problem.
A small lesson learned in...something.
Tuesday, November 18, 2008
A Java PL/SQL Port Scanner
We have the multi-room DVR setup, which means we have one DVR but we can view those movies from any TV (with a cable box). Naturally all these boxes are on my internal network (router provided by Verizon).
I understand that with some boxes you can go out and buy additional storage, but I wanted a DIY solution. So, if I could find an open port on one of these boxes, then perhaps I could figure out a way to get the movies onto my computer.
Java to the rescue. Well, not really, but I have always wanted to learn more about that kind of lower level programming (sockets/ports/TCP/whatever). See, I don't really know what I'm talking about which is why I did it.
Since I use JDeveloper for writing SQL and PL/SQL, creating a java class (with the Main method) is easy.
import java.io.IOException;
import java.net.Socket;
import java.net.UnknownHostException;
import java.util.Date;
public class Class1
{
private static void p ( String text )
{
System.out.println( text );
}
public static void main(String[] args) throws UnknownHostException, IOException
{
Date startDate = new Date();
String machine = "192.168.1.102";
Thread thread = new Thread();
Socket socket;
int j = 0;
int x = 0;
p( "start date/time: " + startDate.toString() );
for ( int i = 1000; i < 10000; i++ )
{
x++;
p( "checking port: " + machine + ":" + i );
try
{
socket = new Socket( machine, i );
p( "Port open: " + machine + ":" + i );
j++;
}
catch (UnknownHostException e) { }
catch (IOException e) { }
finally
{
}
}
p( "ports checked: " + x );
p( "ports open: " + j );
p( "start date/time: " + startDate.toString() );
p( "end date/time: " + new Date().toString() );
}
}

As LC would say, "That's not so bad."
Only it was slow, 1 port at a time. I did learn that trying to open up a port connection is s l o w. How can I speed this up?
I could make it threaded right? No. That would take too much time.
But hey, I know how to do it in PL/SQL (threaded that is). UTL_TCP should work nicely. I've never had to use it, so a great opportunity to familiarize myself with it (not that it's needed very often).
So I start Oracle XE and initially just try it out to see how it works:
DECLARE
c UTL_TCP.CONNECTION;
BEGIN
c := utl_tcp.open_connection
( remote_host => '192.168.1.2',
remote_port => 1,
tx_timeout => 1 );
utl_tcp.close_connection( c );
EXCEPTION
WHEN others THEN
utl_tcp.close_connection( c );
END;
/

Easy enough. Now let's use DBMS_JOB to "thread" this.
DECLARE
TYPE r_record IS TABLE OF OPEN_PORTS.IP%TYPE INDEX BY BINARY_INTEGER;
l_table R_RECORD;
job_count PLS_INTEGER;
job_number PLS_INTEGER;
BEGIN
l_table(1) := '192.168.1.102';
FOR i IN 1..l_table.COUNT LOOP
FOR j IN 1..10000 LOOP
SELECT COUNT(*)
INTO job_count
FROM user_jobs;
WHILE job_count < 20 LOOP
dbms_job.submit
( job => job_number,
what => 'BEGIN check_port( ''' || l_table(i) || ''',' || j || '); COMMIT; END;' );
COMMIT;
END LOOP;
END LOOP;
END LOOP;
END;
/

So it's running, and running, and running...I opened up another session and did a count on user_jobs. 10000. 12000. 20000. Uh, I think there's something wrong here.
I finally kill the session and there are 36,446 jobs in the queue. All checking the same port. (The WHILE loop was the culprit: job_count is queried once and never re-checked inside the loop, so it just keeps submitting the same job.)
Then I can't even issue a simple SELECT statement. Maximum number of sessions reached or some such rubbish. Just a reminder, it was Saturday night and I was just trying to see if I could do this quickly.
Since I couldn't connect, I tried bouncing the database (a loud gasp from the DBAs out there). Come on! It's a local XE instance...there's nothing important there.
Start it back up, connect as SYS and still can't issue any queries. I do notice as I hit the / sign and the Enter key a bunch of times that once in a while it will let me do a query. So I create a quick PL/SQL block to remove all the jobs from the queue.
BEGIN
FOR i IN ( SELECT job FROM user_jobs ) LOOP
dbms_job.remove( i.job );
END LOOP;
END;
/

I go through the / + Enter key combo about 50 times until it finally runs. Success!
Or so I think. Still receiving the max sessions error. There are still 36,446 jobs in user_jobs. I can't remove them anymore as I get "job doesn't exist" when I try to dbms_job.remove it again. Hmmm...How about?
DELETE FROM dba_jobs;
36,446 rows deleted.

What?
How can that be? I didn't expect that to work at all...but it did. I have yet to look up the particulars of my actions (deleting from dba_jobs) but it did the job.
And yes, I'm still planning on writing this (correctly) in the near future...I want more movies!
Tuesday, November 4, 2008
My Virtual [Column] Failure
We're trying to create a new process that will detect fraud. Much of the data is sensitive in nature ('natch, thanks Jake).
Step 1: Create a encrypted tablespace
CREATE TABLESPACE encrypted_data
DATAFILE 'c:\oracle\oradata\eleven\encrypted_data_001.dbf'
ENCRYPTION USING '3DES168'
DEFAULT STORAGE (ENCRYPT);
Note, I did not do that as our production DBA won't let me near the physical machine (thankfully).
OK, an encrypted tablespace. What else can we do to protect the data? Encrypt it using DBMS_CRYPTO!
Step 2: Create the encryption/decryption routine. For added security, put it in the SYS schema. I decided on that because even if you have EXECUTE ANY PROCEDURE, you can't execute procedures owned by SYS (without being SYS of course). I'll skip the example as it's linked above.
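The package itself is skipped here, but the idea is roughly this sketch (AES-256 via DBMS_CRYPTO with a hard-coded key purely for illustration; a real version would manage the key properly and, as noted, live in the SYS schema):

CREATE OR REPLACE PACKAGE e_pkg
AS
  FUNCTION encrypt( p_value IN VARCHAR2 ) RETURN RAW;
  FUNCTION decrypt( p_value IN RAW ) RETURN VARCHAR2;
END e_pkg;
/
show errors
CREATE OR REPLACE PACKAGE BODY e_pkg
AS
  --hard-coded 256-bit key, for illustration only
  g_key  RAW(32)     := UTL_I18N.STRING_TO_RAW( '12345678901234567890123456789012', 'AL32UTF8' );
  g_mode PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_AES256
                      + DBMS_CRYPTO.CHAIN_CBC
                      + DBMS_CRYPTO.PAD_PKCS5;

  FUNCTION encrypt( p_value IN VARCHAR2 ) RETURN RAW
  IS
  BEGIN
    RETURN DBMS_CRYPTO.ENCRYPT
           ( src => UTL_I18N.STRING_TO_RAW( p_value, 'AL32UTF8' ),
             typ => g_mode,
             key => g_key );
  END encrypt;

  FUNCTION decrypt( p_value IN RAW ) RETURN VARCHAR2
  IS
  BEGIN
    RETURN UTL_I18N.RAW_TO_CHAR
           ( DBMS_CRYPTO.DECRYPT
             ( src => p_value,
               typ => g_mode,
               key => g_key ),
             'AL32UTF8' );
  END decrypt;
END e_pkg;
/
show errors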
This has to be fast, really fast. How can I index?
Well, I could index the RAW column, encrypt the incoming text/number and then compare...but we need the ability to do partial matches.
Virtual Columns to the rescue! It's not that exciting really, but a good opportunity to put them to use.
So here's the definition of the table:
CREATE TABLE t
(
id NUMBER(10)
CONSTRAINT pk_id PRIMARY KEY,
name_e RAW(256),
name AS ( CAST( sys.e_pkg.decrypt( name_e ) AS VARCHAR2(20) ) ) VIRTUAL,
dob_e RAW(256),
dob AS ( CAST( sys.e_pkg.decrypt( dob_e ) AS VARCHAR2(11) ) ) VIRTUAL
);
Instead of doing the decryption in a view I can do it in the table and the best part of all is that you can index those virtual columns.
CREATE INDEX name_idx ON t( name );
CREATE INDEX ssn_idx ON t( ssn );
I loaded some records into the table and began testing. Wall clock results weren't too bad, less than a second for searches (~1 million records) on SSN.
This is where the problems began.
The actual table has some 60 columns. Since I'm lazy, I created a procedure with an input parameter of T%ROWTYPE. You can use the CAST columns (VARCHAR2) to hold the incoming values, encrypt them and put them in their proper column.
I initially had difficulties using the %ROWTYPE, so I went back to named parameters (p_name, p_ssn, etc). What I didn't realize is that the lengths of the values were no longer constrained. Running some tests, I managed to successfully insert some data, much of it longer than the CAST would allow. When performing a SELECT on the table, Oracle gave me the wonderful
ORA-06502: PL/SQL: numeric or value error: character string buffer too small

After a bit of searching, I found the offending records and tried to DELETE them:
CJUSTICE@SANDBOX>DELETE FROM t WHERE id = 10;
DELETE FROM t WHERE decisionid = 10
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.E_PKG", line 42
What? I'm just trying to DELETE a row...not SELECT from it.
So I leave this to the internals experts out there. Why does Oracle attempt to decrypt the column on a DELETE?
Monday, October 27, 2008
Carl Backstrom
I began my search of some of the Las Vegas papers but was unable to confirm. Then Justin Kestelyn and Eddie Awad confirmed it.
I never had the pleasure of meeting Carl in person.
I met him first through the Oracle ApEx forums more than 3 years ago. At the time I was a newbie but his responses (to me and others) were always patient and well thought out. I never once saw him condescend in the forums. He was a tremendous resource to the ApEx community and he will be sorely missed.
Of late he left comments on my blog or we went back and forth on Twitter. We've even traded a few emails here and there. I only knew him through the 'tubes, but this has affected me more than I thought it would.
I had/have a great amount of respect for him.
My condolences go out to his friends and family.
Tuesday, October 21, 2008
JDeveloper, SQL Developer, Subversion and Tortoise
Very annoying because I like both tools.
So I went searching and found this document (online) in the help files. So I tried to put that DLL in the Tortoise bin directory, but of course that wouldn't work because they've renamed everything with a leading "T."
So I downloaded the actual SVN client from Subversion. Then put the dll in that directory...now I open up SQLDev and get this seay32.dll (or something like that) missing. Awesome.
Back to searching and I found this one. user594768 has only 2 posts but this one did the trick.
First, download the 1.2.0 SVNKit from here.
Then, do the following:
copy
- jna.jar
- svnkit-javahl.jar (rename into svnjavahl.jar)
- svnkit.jar
- svnkitsrc.zip
- trilead.jar
into your
Start SQL Developer and go to Tools --> Preferences --> Versioning and it should read:
Subversion Client
SVN/1.5.2 SVNKit/1.2.0 (http://svnkit.com/) r4949
This works for both JDeveloper and SQL Developer.
Saturday, October 18, 2008
Oracle and The Register
Oracle shareholders choke on Ellison's package
Oracle shareholders enhance Ellison's package
It seems Gavin Clark over at The Register has a pretty good sense of humor.
Wednesday, October 8, 2008
%TYPE, What's the Point?
So after a few years of using them I have something of an opinion on them.
Pros
- Strongly typed declarations
- Inheritance - If the column data type changes, you don't have to change any of your packaged code (not really sure if that is different than #1)
Cons
- Difficult to debug - What data type was APPLICATION_DETAIL.FOOID or worse, what was SCHEMANAME.APPLICATION_DETAIL.FOOID? Is it a NUMBER, VARCHAR2, or something else? Off to SQL Developer or SQL*Plus to do a describe on the table...I once spent a full day trying to figure out which of the 30 passed in parameters (and their values) was throwing a data type error. Another developer finally found it.
- Too much typing - I love to type. Seriously. I'm going to say it...this is too much typing.
- It's Ugly - Alright, that's not really a con is it? I like my code pretty. Many times using the SCHEMANAME.APPLICATION_DETAIL.FOOID%TYPE takes me over 90 characters wide...it's just ugly.
In a development situation or trying to spec something out, I can definitely see the value. Nothing is set in stone and needs to be somewhat fluid. But in a stable production environment? Is it really necessary? If you are going to change the data type or precision of a column, wouldn't you expect to make a few changes? For some reason I think of the need to change your Primary Key, and all references...
%TYPE has its merits. But I think the love affair is over for me.
Thursday, October 2, 2008
Monday, September 29, 2008
Database Design: Things to Consider
By no means is the list exhaustive, just some of the things I think of when planning out my design.
What kind of data? If you’re going to be storing BLOBs of any sort, you may need a lot of space. Work with your DBA to figure out a plan.
How much data? Are you a telecom doing thousands of transactions per second? A non-profit doing 100 transactions a day? Somewhere in between?
Who are your customers? Will this be public facing? Internal only?
How will you handle security? Database Authentication? Roll your own table based authentication? If it’s a web based application, it’s probably easier to roll your own as it’s a stateless environment. If you’re doing a client/server application, Database Authentication is probably the best way to go as you won’t have to reinvent the wheel.
Can you use VPD or Application Contexts?
What type of keys? Will you stick to one method and use surrogate keys for everything? Natural keys for everything? Or a mix of both?
What version of Oracle?
Encryption? Will you need columnar encryption (SSN, Credit Card Numbers, etc)? Transparent Data Encryption?
Coming from the end-user world, I tend to start with reports. What kind of information do you need to retrieve from your data? I think that is the most important thing actually. For without good information, what's the point?
Anyway, feel free to add your own. What are some of things you consider whilst designing your database?
Friday, September 26, 2008
Job: Oracle DBA in Tampa
DESCRIPTION:
The Database Analyst provides database administration support for production, development, and test environments to include day-to-day monitoring and maintenance, problem investigation and resolution, backup creation and maintenance, SQL and database optimization and tuning as well as researching, analyzing, and recommending patches, upgrades, and new technologies surrounding all aspects of data management using relational databases.
RESPONSIBILITIES:
* Provide support for all Oracle database environments to include problem identification, reporting, tracking, analysis, and resolution.
* Research and analyze all assigned anomalies - record, track and log all findings and recommendations.
* Participate in production on-call rotation (one week at a time of 7x24 on-call support for production environments).
* Monitor database performance and space utilization – investigate and resolve reported issues.
* Establish, maintain, and monitor database backups and DRP sites.
* Establish, maintain, and monitor database security, roles, user-role assignments, and individual logins per company security policies and procedures.
* Establish and maintain data replication as required for each application.
* Assist project teams in setting up and testing development, test, and production environments.
* Research, analyze, and recommend patches, tools, and/or database upgrades to resolve issues and add features/functionality.
* Architect, design, and implement replication and/or data synchronization.
* Assist project teams in designing efficient data access methods and procedures, including data structures (physical data modeling), SQL techniques, and use of database tools/utilities.
* Provide technical leadership to the DBA team by assisting junior team members and sharing knowledge and research of tools/techniques, and Oracle features.
* Serve as team liaison to project managers for database administration and system administration services.
REQUIREMENTS:
* 7 years Oracle database administration experience to include versions 9I and 10G.
* 7 years SQL experience with expert-level skills in the SQL language.
* 3 years near-real time replication experience - any tool (streams, golden gate, etc).
* 1 year RAC or ASM experience.
* RMAN/Legato backup experience.
* 4 year BS or BA degree required in a computer related field or 6 years of directly related work experience in a large-scale IT environment.
PREFERRED EXPERIENCE:
* PL/SQL development experience.
* Oracle advanced replication, streams, or Golden Gate replication.
* Oracle Application Server (not Oracle Applications 11i).
* Experience in Oracle RAC (10G).
* Experience in Oracle Advanced Security & Data Encryption.
* Transparent application failover.
Wednesday, September 24, 2008
More on Oracle Exadata Storage
See my previous post here full of a few screen shots from Larry's speech.
A great post on most of the stuff I don't understand from Kevin Closson who works more on the datawarehouse side.
Paul Vallee over at the Pythian Group blog ruminates here and here.
Curt Monash talks about it here and here.
Dana Gardner from ZDNet.
Larry Dignan from CNET.com via ZDNet Australia.
Michael Liedtke via AP.
And how could I forget my local Oracle blogger, Mr. Cunningham?
I'm still amazed at how excited I am. It's just a machine right? Hopefully I'll get to use one of those someday...
Oracle Exadata Storage
Press Release
Exadata
Here's a picture of the new beast:
In comparison to Larry:
Some of the characteristics:
And finally, it runs OLTP too!
Tuesday, September 23, 2008
Oracle in the Amazon Cloud
This is pretty old by now...I had every intention of blogging about it yesterday, but I didn't get to the computer last night.
So I'm "watching" Twitter, there's a whole lot of activity from Oracle Open World, and I see an update from oracletechnet (a.k.a. Justin Kestelyn, the OTN Editor-in-Chief) about the new Cloud Computing Center on OTN.
Of particular interest is the Flash demo of provisioning an EC2 instance with Oracle provided virtual machine images (Amazon Machine Images or AMIs).
Licensing information can be found here. The FAQs can be found here.
The one problem that I haven't seen resolved yet with the Cloud setup (hosted offsite) is security. For a financial services company, it would be difficult if not impossible. I did ponder using the Cloud as a dev and QA environment, as long as everything was masked beforehand and you used Transparent Data Encryption (TDE). Also, as the FAQ states, it might be a good place to store your backups.
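As a side note, column-level TDE itself is pretty painless once the wallet exists. A rough sketch (hypothetical table and column names; it assumes the wallet and master encryption key have already been set up and opened):
-- Hypothetical table; only the sensitive columns are encrypted.
-- Assumes the encryption wallet/master key is already configured and open.
CREATE TABLE customer_sensitive
(
  customer_id NUMBER       NOT NULL,
  ssn         VARCHAR2(11) ENCRYPT USING 'AES256',
  card_number VARCHAR2(19) ENCRYPT,  -- default algorithm
  CONSTRAINT customer_sensitive_pk PRIMARY KEY ( customer_id )
);
-- Or encrypt an existing column in place:
-- ALTER TABLE customer_sensitive MODIFY ( ssn ENCRYPT USING 'AES256' );
Keep in mind TDE only protects the data at rest, which is why the masking step still matters for dev and QA copies.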
Sunday, September 14, 2008
Precious Memory?
This weekend, he says to us, "Mom, remember when I peed on you when I was a baby?" He starts laughing..."aaaah, precious memories."
What? "Precious" memories? Where the hell did he get that from?
Tuesday, September 9, 2008
Fun with SQL: Analytics and Hierarchical Queries
Given this data, for each consecutive time period (month) that there is an amount, count how many buckets, up to six:
ID DIFF_ID START_DAT END_DATE AMOUNT
--------- ---------- --------- --------- ----------
1 4 01-JAN-08 31-JAN-08 40
2 4 01-FEB-08 29-FEB-08 0
3 4 01-MAR-08 31-MAR-08 10
4 4 01-APR-08 30-APR-08 10
5 4 01-MAY-08 31-MAY-08 0
6 1 01-JAN-08 31-JAN-08 10
7 1 01-FEB-08 29-FEB-08 0
8 1 01-MAR-08 31-MAR-08 10
9 1 01-APR-08 30-APR-08 10
10 1 01-MAY-08 31-MAY-08 10
My first thought was definitely analytics. I toiled away on what became a very unwieldy query (it ran to more than a page, anyway): a whole bunch of LAGs with a matching, ever-increasing number of CASE statements.
My first obstacle was filtering out the rows with a 0 amount; it took a good while to figure that out for some reason. That left me with:
ID DIFF_ID START_DAT END_DATE AMOUNT
---------- ---------- --------- --------- ----------
1 4 01-JAN-08 31-JAN-08 40
3 4 01-MAR-08 31-MAR-08 10
4 4 01-APR-08 30-APR-08 10
6 1 01-JAN-08 31-JAN-08 10
8 1 01-MAR-08 31-MAR-08 10
9 1 01-APR-08 30-APR-08 10
10 1 01-MAY-08 31-MAY-08 10
Once I had that sorted, I needed to determine which rows were consecutive. Frank Zhou is always solving puzzles with SQL, and I remembered responding to one of his posts about a year ago. If you get a chance, take a look at his site; he solves some pretty cool puzzles using the MODEL clause and analytics.
Anyway, his post, How to find the earliest start date and the latest end date for consecutive transactions in SQL, tackled a similar problem (and my response was similar), so I dug it up to revisit my thinking at the time.
First, I use LAG to get the previous row's END_DATE, using DIFF_ID in the PARTITION clause (the window) and ordering by END_DATE, then add one day to see whether the months are consecutive. If that value matches the current row's START_DATE, they are consecutive, and I use LAG again to grab the previous row's (unique) ID, which becomes PREV_ID.
SELECT
diff_id,
id,
start_date,
end_date,
( CASE
WHEN LAG( end_date ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC ) + 1 = start_date
THEN
LAG( id ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC )
ELSE NULL
END ) prev_id,
amount
FROM col_test
WHERE amount > 0
That produces the following output; as you can see, I have 3 records with PREV_ID populated:
DIFF_ID ID START_DAT END_DATE PREV_ID AMOUNT
-------- ---------- --------- --------- ---------- ----------
1 6 01-JAN-08 31-JAN-08 10
1 8 01-MAR-08 31-MAR-08 10
1 9 01-APR-08 30-APR-08 8 10
1 10 01-MAY-08 31-MAY-08 9 10
4 1 01-JAN-08 31-JAN-08 40
4 3 01-MAR-08 31-MAR-08 10
4 4 01-APR-08 30-APR-08 3 10
As I was building it, I realized I kept nesting the queries, so in came the WITH clause (when I first learned of it, it was terribly difficult to search for; I didn't know it was also called the subquery factoring clause...).
Much better. Note the START WITH and CONNECT BY PRIOR: I created my own hierarchical query to give me another window to PARTITION on (the MIN_START_DATE of the consecutive records).
WITH sub
AS
(
SELECT
diff_id,
id,
start_date,
end_date,
( CASE
WHEN LAG( end_date ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC ) + 1 = start_date
THEN
LAG( id ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC )
ELSE NULL
END ) prev_id,
amount
FROM col_test
WHERE amount > 0
)
SELECT
diff_id,
id,
start_date,
end_date,
TO_DATE( SUBSTR( SYS_CONNECT_BY_PATH(
TO_CHAR( start_date, 'MMDDYYYY' ), '-' ), 2, 8 ), 'MMDDYYYY' ) min_start_date
FROM sub
START WITH prev_id IS NULL
CONNECT BY PRIOR id = prev_id
DIFF_ID ID START_DAT END_DATE MIN_START
------- ---------- --------- --------- ---------
1 6 01-JAN-08 31-JAN-08 01-JAN-08
1 8 01-MAR-08 31-MAR-08 01-MAR-08
1 9 01-APR-08 30-APR-08 01-MAR-08
1 10 01-MAY-08 31-MAY-08 01-MAR-08
4 1 01-JAN-08 31-JAN-08 01-JAN-08
4 3 01-MAR-08 31-MAR-08 01-MAR-08
4 4 01-APR-08 30-APR-08 01-MAR-08
Now all I have to do is PIVOT the table (I chose not to use the new PIVOT feature) on DIFF_ID and add an analytic COUNT on my new window (MIN_START_DATE).
WITH sub
AS
(
SELECT
diff_id,
id,
start_date,
end_date,
( CASE
WHEN LAG( end_date ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC ) + 1 = start_date
THEN
LAG( id ) OVER
( PARTITION BY diff_id
ORDER BY end_date ASC )
ELSE NULL
END ) prev_id,
amount
FROM col_test
WHERE amount > 0
)
SELECT
diff_id,
COUNT( CASE WHEN d = 1 THEN 1 ELSE NULL END ) b1,
COUNT( CASE WHEN d = 2 THEN 1 ELSE NULL END ) b2,
COUNT( CASE WHEN d = 3 THEN 1 ELSE NULL END ) b3,
COUNT( CASE WHEN d = 4 THEN 1 ELSE NULL END ) b4,
COUNT( CASE WHEN d = 5 THEN 1 ELSE NULL END ) b5,
COUNT( CASE WHEN d = 6 THEN 1 ELSE NULL END ) b6
FROM
(
SELECT
diff_id,
COUNT( id ) OVER
( PARTITION BY diff_id, SUBSTR(
SYS_CONNECT_BY_PATH(
TO_CHAR( start_date, 'MMDDYYYY' ), '-' ), 2, 8 )
ORDER BY end_date ) d
FROM sub
START WITH prev_id IS NULL
CONNECT BY PRIOR id = prev_id
)
GROUP BY diff_id;
And voila! Problem solved:
DIFF_ID B1 B2 B3 B4 B5 B6
------- ---- ---- ---- ---- ---- ----
1 2 1 1 0 0 0
4 2 1 0 0 0 0
Table creation and data can be found here.
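In case that link ever goes away, here's a minimal reconstruction of the test table and data from the listing above; the column types are my best guess.
CREATE TABLE col_test
(
  id         NUMBER,
  diff_id    NUMBER,
  start_date DATE,
  end_date   DATE,
  amount     NUMBER
);
-- Data as shown in the listing at the top of the post.
INSERT INTO col_test VALUES (  1, 4, DATE '2008-01-01', DATE '2008-01-31', 40 );
INSERT INTO col_test VALUES (  2, 4, DATE '2008-02-01', DATE '2008-02-29',  0 );
INSERT INTO col_test VALUES (  3, 4, DATE '2008-03-01', DATE '2008-03-31', 10 );
INSERT INTO col_test VALUES (  4, 4, DATE '2008-04-01', DATE '2008-04-30', 10 );
INSERT INTO col_test VALUES (  5, 4, DATE '2008-05-01', DATE '2008-05-31',  0 );
INSERT INTO col_test VALUES (  6, 1, DATE '2008-01-01', DATE '2008-01-31', 10 );
INSERT INTO col_test VALUES (  7, 1, DATE '2008-02-01', DATE '2008-02-29',  0 );
INSERT INTO col_test VALUES (  8, 1, DATE '2008-03-01', DATE '2008-03-31', 10 );
INSERT INTO col_test VALUES (  9, 1, DATE '2008-04-01', DATE '2008-04-30', 10 );
INSERT INTO col_test VALUES ( 10, 1, DATE '2008-05-01', DATE '2008-05-31', 10 );
COMMIT;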
Friday, September 5, 2008
DBMS_SQL.TO_CURSOR_NUMBER
He wanted to do it in SQL or PL/SQL. Essentially, he wanted to be able to generate type-safe classes (I could look it up, but it's Friday and I'm lazy). It was also meant as a first step toward generating code so it can be moved out of the database into the web service layer. It hasn't been completely decided whether that will happen or not...once it is, I will go with the flow (or find a new job if I can't handle it, right?).
So, how to do it in PL/SQL? I wasn't sure, but I immediately thought that if it could be done, it would be by using the DBMS_SQL package. So I opened the docs to see what was available to me.
Reading through, I found a procedure, TO_CURSOR_NUMBER. Basically it allows you to accept a ref cursor as an input parameter and convert it to a DBMS_SQL cursor handle. Then you can use all the goodness of DBMS_SQL as you normally would.
I created a short working example (which doesn't differ a whole lot from the example provided). Here's the function returning a ref cursor:
CREATE OR REPLACE
FUNCTION get_cursor RETURN SYS_REFCURSOR
IS
c SYS_REFCURSOR;
BEGIN
OPEN c FOR
SELECT
owner,
table_name,
tablespace_name,
status,
logging,
last_analyzed
FROM all_tables
WHERE rownum < 51;
RETURN c;
END get_cursor;
/
show errors
And here's the anonymous block I used to transform the ref cursor into a DBMS_SQL cursor. (I get tired of typing out DBMS_OUTPUT.PUT_LINE every two lines, so if I'm going to use it more than once or twice, I declare a local procedure named "p" to do it.)
DECLARE
c SYS_REFCURSOR;
l_cursorid NUMBER;
l_column_count INTEGER;
l_describe_table DBMS_SQL.DESC_TAB;
l_numvar NUMBER;
PROCEDURE p( i_text IN VARCHAR2 )
IS
BEGIN
dbms_output.put_line( i_text );
END p;
BEGIN
c := get_cursor;
l_cursorid := dbms_sql.to_cursor_number( c );
p( 'Cursor ID: ' || l_cursorid );
dbms_sql.describe_columns( l_cursorid, l_column_count, l_describe_table );
p( 'Column Count: ' || l_column_count );
p( 'DESC_TAB Count: ' || l_describe_table.COUNT );
FOR i IN 1..l_describe_table.COUNT LOOP
p( 'Column: ' || l_describe_table(i).col_name );
END LOOP;
dbms_sql.close_cursor( l_cursorid );
END;
/
Here's the output when I run it:
Cursor ID: 1438299795
Column Count: 6
DESC_TAB Count: 6
Column: OWNER
Column: TABLE_NAME
Column: TABLESPACE_NAME
Column: STATUS
Column: LOGGING
Column: LAST_ANALYZED
Fairly easy to use; it took about 20 minutes to read the docs and write it up. As you can see from the definition of DBMS_SQL.DESC_TAB (below), you have access to quite a number of attributes of the cursor. I think I found (with the help of my DBA) the conversion of the col_type integer to its text equivalent, but I can't say for certain yet as I haven't tried it; a sketch of one possible mapping follows the record definition.
TYPE desc_rec IS RECORD (
col_type BINARY_INTEGER := 0,
col_max_len BINARY_INTEGER := 0,
col_name VARCHAR2(32) := '',
col_name_len BINARY_INTEGER := 0,
col_schema_name VARCHAR2(32) := '',
col_schema_name_len BINARY_INTEGER := 0,
col_precision BINARY_INTEGER := 0,
col_scale BINARY_INTEGER := 0,
col_charsetid BINARY_INTEGER := 0,
col_charsetform BINARY_INTEGER := 0,
col_null_ok BOOLEAN := TRUE );
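Here's the kind of col_type mapping I mean. Treat it as a sketch only: I haven't verified every code against the docs, and the function name is just something I made up for illustration.
CREATE OR REPLACE
FUNCTION col_type_name( i_col_type IN BINARY_INTEGER ) RETURN VARCHAR2
IS
BEGIN
  -- Maps a handful of the common internal datatype codes to readable names.
  RETURN CASE i_col_type
           WHEN   1 THEN 'VARCHAR2'
           WHEN   2 THEN 'NUMBER'
           WHEN  12 THEN 'DATE'
           WHEN  96 THEN 'CHAR'
           WHEN 112 THEN 'CLOB'
           WHEN 113 THEN 'BLOB'
           WHEN 180 THEN 'TIMESTAMP'
           ELSE 'UNKNOWN (' || i_col_type || ')'
         END;
END col_type_name;
/
show errors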
Thursday, September 4, 2008
Jobs: Data Architect/Software Developer
So I like helping people out? What's wrong with that? ;) I will receive no money from this...
This is a job I never interviewed for. I did a phone screen with the Recruiter (a Chase Paymentech recruiter), but I received an offer from Revolution Money later that afternoon and accepted.
She found my resume on Monster, and when I read it (the font size was like 72!), I thought this was tailor-made for me.
Either post a comment (I won't publish it) or email me at chet dot justice at gmail for contact information. Here are the requirements:
Data Architect/Software Developer will work as part of a team to refactor Chase Paymentech's current data structures and design new solutions.
This is a great opportunity for an experienced Oracle Developer who is looking to move into a Data Architect role.
Bachelor's degree with major in Computer Science with seven years of progressive development experience.
Ideal candidate will have experience working in an Oracle environment with OLAP and DSS databases along with strong PL/SQL coding skills.
This position also requires strong data modeling skills and knowledge of Informatica and database release process in order to be successful.
Candidate should be able to demonstrate experience in a mentoring or training role.
I will add that this position is new and so is the "department." It's kind of "start-uppy" within this giant organization. That's something that really appealed to me.
Monday, September 1, 2008
Design: The Entity = The Party
In a recent interview, the prospective employer mentioned the "party" model. I had no idea what they were talking about at the time, only that it sounded similar to my entity model.
In another interview, I was asked about subtyping. I didn't know the vernacular as it pertained to database modeling, but I went on to explain the entity model. He told me they were one and the same! Now I have a name for it, and it turns out I arrived at something others had already "invented." It would have been easier to read one of the books on modeling that discuss the Party Model, but I can't seem to read technical books (online is a different story for some reason). I also think it's pretty cool that I came to the same conclusion as others outside of their influence, though I do have to wonder if I took it in at some point and just don't explicitly recall it.
Anyway, it's definitely nice to have an idea validated.
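To make the idea concrete, here's a minimal sketch of the supertype/subtype (party) structure as I think of it; the table and column names are mine, not pulled from any of the links below.
-- Supertype: anything that can enter into a relationship (a person or an organization).
CREATE TABLE parties
(
  party_id   NUMBER       NOT NULL,
  party_type VARCHAR2(12) NOT NULL,  -- 'PERSON' or 'ORGANIZATION'
  CONSTRAINT parties_pk  PRIMARY KEY ( party_id ),
  CONSTRAINT parties_ck1 CHECK ( party_type IN ( 'PERSON', 'ORGANIZATION' ) )
);
-- Subtypes share the supertype's key.
CREATE TABLE people
(
  party_id   NUMBER       NOT NULL,
  first_name VARCHAR2(50) NOT NULL,
  last_name  VARCHAR2(50) NOT NULL,
  CONSTRAINT people_pk  PRIMARY KEY ( party_id ),
  CONSTRAINT people_fk1 FOREIGN KEY ( party_id ) REFERENCES parties ( party_id )
);
CREATE TABLE organizations
(
  party_id NUMBER        NOT NULL,
  org_name VARCHAR2(100) NOT NULL,
  CONSTRAINT organizations_pk  PRIMARY KEY ( party_id ),
  CONSTRAINT organizations_fk1 FOREIGN KEY ( party_id ) REFERENCES parties ( party_id )
);
Addresses, phone numbers, and other contact points can then hang off PARTIES once and apply to people and organizations alike.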
Below are some general links on data modeling and specific ones on the Party Model:
Data Modeling on Wikipedia
A Universal Person and Organization Data Model
Siebel/Oracle - Party Data Model
Party Information Framework
Thursday, August 28, 2008
Jobs: ETL Developer
Position Description:
· Develop and implement applications that support the business strategy of measuring and optimizing performance
· Support existing data warehouse applications and users
Mandatory Skills/Experience
· 2+ years experience working on data warehouse/data integration solutions
· 2+ years working with vendor ETL tools to design/build/test (Business Objects Data Integrator XIR2)
· 1+ years working with Oracle database; good PL/SQL knowledge
· Experience with dimensional data models (star schemas)
Preferred Skills/Experience
· Java programming experience
· Data Quality XIR2
· SQL Server 2005/SSIS
· Business Objects Enterprise XIR2
· Crystal Reports XIR2
Wednesday, August 20, 2008
Calculate Total Possible Score for Wii Bowling, Power Throws
I've also been playing the "Training" game of Power Throws (bowling). You are given 10 shots, with another row of pins added each frame. You start out with 10 pins, then 15, then 21, and so on. If you knock down all the pins in a frame, you get a bonus (that frame's total pins * 2). So far, my highest score has been 651.
I've never bothered to figure out what the total possible score is though.
SQL to the rescue.
Using analytics and the CONNECT BY LEVEL option (10g and higher), it's fairly easy:
I had to set the filter on the outer query because you start with 4 rows (10 pins). Here's the query; the results follow below it:
SELECT
pin_row,
pins,
running_total,
rt_with_bonus
FROM
(
SELECT
rownum pin_row,
rownum pins,
SUM( rownum ) OVER ( ORDER BY rownum ) running_total,
( SUM( rownum ) OVER ( ORDER BY rownum ) * 2 ) rt_with_bonus
FROM dual
CONNECT BY LEVEL <= 13
)
WHERE pins >= 4
Voila! 890 is the best possible score if I strike every frame. I did manage to pass my high score by 9 pins tonight as well. It's a great day!
CJUSTICE@XE>BREAK ON REPORT
CJUSTICE@XE>COMPUTE SUM OF RUNNING_TOTAL ON REPORT
CJUSTICE@XE>COMPUTE SUM OF RT_WITH_BONUS ON REPORT
CJUSTICE@XE>/
PIN_ROW PINS RUNNING_TOTAL RT_WITH_BONUS
---------- ---------- ------------- -------------
4 4 10 20
5 5 15 30
6 6 21 42
7 7 28 56
8 8 36 72
9 9 45 90
10 10 55 110
11 11 66 132
12 12 78 156
13 13 91 182
------------- -------------
sum 445 890
So how come no one wants to hire a guy that can do this kind of fun stuff with SQL? ;)
Monday, August 18, 2008
Day 15...
I've had three interviews so far. The first (actually two rounds) was for a data architect position with a local company; I was referred by a friend. I received word last week that I did not get it. I had a little hope that they might give me a go, but our philosophical differences were too great. Essentially, I'm a data-centric person; they were more software (MVC) oriented. I don't think the two positions are mutually exclusive, but I couldn't sell them on it. A bit of religiosity there, I think.
Second was a phone interview for an APEX position in New York. One, I would have been able to visit New York for the first time, and two, I would have been working with APEX again (consistently, anyway). I got bumped because I don't have experience with web services. The extent of their web services was the authentication. Isn't it just like calling a function or something? Seriously, how hard could that be? Anyway...
Third was today. Went pretty well I think. I'll know more later this week.
My sleeping habits have gone to shit. My son got a Wii for his birthday last month and I stay up until all hours playing. I bowled a perfect game tonight in fact! This is why I haven't played video games since Intellivision.
I'll end on a good thought. Today was the first day of school for both the kids and when I got home, I got a rousing ovation from them (always nice). We've had a great time the past couple of weeks.
So I lay down for a nap...and you know how you consciously relax your body so you can sleep? When I relaxed my facial muscles (I have a tendency to grind my teeth), I realized I was smiling...very cool.
Thursday, August 14, 2008
Tampa - Employment Resources
If you're ever in the Tampa area, here are some resources you can utilize to find a job.
Consulting Agencies
Veredus
TekSystems
Tech USA
Sun Personnel
Rita Technology Services
Stradis
Job Sites
Employ Florida - My father sent this one to me yesterday. Very good.
The obvious ones:
Dice
Monster
Career Builder
If I think of any more I'll add them here. If you know of any others in the area, leave 'em in the comments.
Wednesday, August 6, 2008
Oracle Jobs in Doral (Miami), FL
Contact me either through comments or email (chet.justice at gmail) and I'll give you the contact information of the recruiter.
1 Oracle Fin Apps 11i Financials Functional need (core modules)
6 Oracle PL/SQL Developer needs w/ FinApps experience
2 Lead PL/SQL Developer needs w/FinApps
4 Oracle Reports 6i Developer needs
2 Lead Oracle Reports 6i Developer needs
2 Oracle Discoverer needs