Tuesday, November 13, 2012

Taking the Mystery out of the Question Mark

One of the frequent comments I hear from people goes something like this:  "Your company is really good at customer service.  How do you do it?"

It's pretty simple.  People ask questions, and when they do, they want and deserve an answer.

Many technical people have an interesting way of responding to emails.  It involves picking and choosing which questions they want to answer.  There's a running joke at DBConnect about this trait in IT, and we see this issue rear its ugly head all too frequently.

At DBConnect, we pride ourselves on providing prompt and complete answers to questions.  Sometimes that requires a lot of hard work.  It also frequently requires follow-up, because questions often lead to additional questions, and those in turn lead to additional work.

That's not a bad problem to have in an uncertain economy, to be sure.  Happy customers are a good thing.

So, our advice to Information Technology professionals is to pay attention to those question marks.  They really do mean something important; ignoring a question mark can lead to an unhappy customer and the loss of business.


Monday, August 20, 2012

Counting lines in text files

One of the best ways to enhance error checking in the scripts you write as a DBA is to search for error messages and codes in log files.  A related question that often comes up during error checking is how to count the number of lines in a text file.  The method differs depending on whether you are using Unix or Windows.

On Windows, the best way we've found to count the number of lines in a text file is to run this from the command prompt:

findstr /R /N "^" {filename.txt} | find /C ":"

(Hint:  The ^ character represents the start of a line, so findstr matches every line in the file; the /N switch prefixes each match with its line number and a colon, and find /C ":" counts those colons.)

On Unix, we use this method:

wc -l {filename.txt}

or

cat {filename.txt} | wc -l
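
Tying the two ideas together, here's a minimal sketch of the kind of error checking mentioned above (the log file name and the ORA- search string are just assumptions for the example):

#!/bin/sh
# Count the lines in the log that contain Oracle error codes
errors=`grep -c "ORA-" backup.log`

if [ "$errors" -gt 0 ]; then
    echo "Found $errors ORA- errors in backup.log"
    exit 1
fi

Note that grep -c counts matching lines rather than total matches, which is usually what you want when deciding whether a script step failed.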

Saturday, August 18, 2012

Wrapping up a busy week!

What a busy week we've had here at DBConnect Solutions!  News, news, and more news to share on some recent happenings.

First off, we are proud to announce the official release of the first offering in our NightOwl suite of products.  The NightOwl™ Health Check for Oracle JDE is an automated report that compares your JDE Oracle database against our set of Best Practices.  The health check reviews security, architecture & topology, backup & recovery, and JDE-specific areas, and produces web-based and PDF reports for your review.  Our official press release can be found on our website in our resources section here.  We've received an exceptional initial response and were thrilled to see our release picked up by the likes of Yahoo Finance, CNBC, and MarketWatch.

We were also happy to be involved in the release of Wave Systems' new Wave Cloud product.  Wave Cloud is a cloud-based service for enterprise-wide management of self-encrypting drives (SEDs).  DBConnect Solutions was selected to take part in the beta testing program while implementing our own security policies to ensure our client data is protected.  You can find out more about Wave Cloud by reading the official press release here.

Finally, we are excited to announce that DBConnect Solutions will be hosting a booth at the Oracle OpenWorld convention this year from Monday, October 1st through Wednesday, October 3rd.  Our display will be located in the JD Edwards Pavilion in the 3rd floor lobby of the InterContinental Hotel, which is right next to the Moscone West Exhibition Hall.  Stop by to learn more about our offerings, including our NightOwl™ products, and receive a free gift.  You can contact us at info@dbconnect.com to learn about your chance to receive a free Oracle OpenWorld Discover pass to gain access to exhibits, keynotes, and more.  Stay tuned for more info as we get closer to the conference.

These are exciting times at DBConnect and we're just getting started!  Stop back here often to hear our latest news or sign up on our website to receive updates via email.

Friday, August 17, 2012

When good JDE queries go bad

It's 4 a.m. and your cell phone is ringing again.  You groan because you already know why.  Your JDE job queue is backed up again, and batch jobs are running forever.

You just don't understand it.  These little jobs that normally run in under a minute sometimes take as long as an hour to complete.  And worse yet, your users in Europe have started complaining about system performance when running web applications at the same time your job queue gets backed up.  It just doesn't make any sense.  Or does it?

Unfortunately, the above scenario is fairly common when you use an Oracle database with JDE.

Here's an explanation of what can happen to throw off the Oracle optimizer and cause SQL queries to start behaving very badly.  

By default, Oracle statistics are updated in the nightly maintenance window that begins at 10 p.m.  Any table that has had enough changes since statistics were last collected will get new stats.  Also by default, Oracle uses an internal algorithm to determine when to invalidate cached SQL cursors.  (This setting can be changed to force an immediate invalidation if you desire -- DBConnect typically does force immediate invalidation, by the way.)
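
For reference, here's one way to force that immediate invalidation on 11g (a sketch assuming the default AUTO_INVALIDATE preference is still in effect):

-- Invalidate dependent cursors as soon as new stats are published,
-- instead of letting Oracle stagger the invalidations over time
EXEC DBMS_STATS.SET_GLOBAL_PREFS('NO_INVALIDATE', 'FALSE');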

Once new statistics are available for a table, and when any cached cursors are invalidated, new SQL plans will be calculated when an SQL statement next references the given table.  This is a key moment in time, because whatever plan the Oracle engine calculates upon the very first execution of the SQL statement will then be used for all subsequent executions (as long as the SQL statement remains cached).

If, therefore, a BAD execution plan happens to be calculated, the performance will obviously be bad.

But how can this happen?

This can happen because of something called bind peeking.  Bind peeking occurs when Oracle "peeks" into the actual values used within an SQL statement's bind variables, and makes a decision on an execution plan based on these values.  That's usually a very good thing because it means that Oracle is using the ACTUAL values used in the query to provide the best possible result.

But what if the data in the underlying tables is skewed?

If the underlying data is skewed (a very common issue in JDE databases - think of your companies, divisions, doc types, etc.), then Oracle might make a great choice when an SQL statement is first executed, but the path used for the next execution might be a terrible choice.

Let's look at this example on your Sales Order Detail File, F4211:

Let's say you have approximately 1 million records in this table, and the breakdown of your doc types (SDDCTO) is as follows:

S1 - 10 records
S2 - 20,000 records
S3 - 20,000 records
S4 - 10,000 records
S5 - 10,000 records
S6 - 10,000 records
S7 - 20,000 records
S8 - 10,000 records
S9 - 900,000 records
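
You can check for this kind of skew yourself with a simple count (a sketch; substitute your own table and column):

SELECT SDDCTO, COUNT(*)
FROM PRODDTA.F4211
GROUP BY SDDCTO
ORDER BY 2 DESC;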

Now, let's say stats are updated on F4211 at 10:00 p.m., and then a UBE runs at, say, 11 p.m., executing an SQL statement that looks like this:

SELECT * FROM PRODDTA.F4211 WHERE SDDCTO = :1

If the very first execution of the SQL statement specifies a value of S1 for doc type, then it is very likely an index will be used because of a composite index that includes the SDDCTO field.  This is because there are only 10 records with a value of S1, and Oracle knows about this because stats were just collected.

The SQL statement will run very fast.

Now, let's say the UBE continues and now specifies a value of S9 for doc type.  The execution plan is already cached, so it will use the index on SDDCTO.  This means it will need to do an index range scan to locate 900,000 entries, and then take the resulting 900,000 ROWIDs, and go retrieve those records from the table!  This is extremely inefficient, and it would be far more efficient to do a full scan of the F4211 table.
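
You can see the difference for yourself by comparing the plans Oracle chooses for literal values at each end of the skew (a sketch using the standard EXPLAIN PLAN facility; the plans will generally only differ if a histogram exists on SDDCTO, which Oracle typically builds automatically on skewed columns it has seen used in predicates):

EXPLAIN PLAN FOR SELECT * FROM PRODDTA.F4211 WHERE SDDCTO = 'S1';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR SELECT * FROM PRODDTA.F4211 WHERE SDDCTO = 'S9';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);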

So what can you do at 4 a.m.?

Force dependent SQL statements to be reparsed!

A very easy way to do this is to issue a null comment on the table.  Since this is a DDL statement, it will force all dependent cursors to be invalidated:

COMMENT ON TABLE PRODDTA.F4211 IS '';

The down side is that the currently-running query will have to complete before Oracle will start using a new plan.  Once the statement has completed, though, Oracle will reparse the SQL statement upon its next execution, and you might just find that the batch jobs take off, and you can go right back to sleep.
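
You can verify the effect by watching the cursor get invalidated and reparsed (a sketch; the sql_text filter is just an assumption for this example):

SELECT SQL_ID, CHILD_NUMBER, PLAN_HASH_VALUE, EXECUTIONS, INVALIDATIONS
FROM V$SQL
WHERE SQL_TEXT LIKE 'SELECT * FROM PRODDTA.F4211%';

A new PLAN_HASH_VALUE on the next execution confirms that Oracle calculated a new plan.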

Disclaimer:  We don't consider this a FIX for this type of issue so much as a troubleshooting tactic when you get into a critical situation.  There are strategies for preventing issues like this from ever presenting themselves, and that's what we recommend our customers pursue when these types of issues occur.

Friday, February 25, 2011

To Truncate or Delete, Is it Really a Question?

One of the primary objectives of an Oracle DBA is to protect the data that is stored in the database. The potential for data loss comes from many different threats, and an often-overlooked one is a user accidentally deleting data. By implementing proper security mechanisms, the DBA can prevent unauthorized users from having the ability to delete data from tables they shouldn't be accessing. However, there will be occasions when a developer needs to delete data for testing purposes or when working with custom tables. Defining this access is fairly straightforward as well, but there can also be situations when providing additional access can actually be beneficial. If a developer needs to delete all the data in a table, they have two options: they can issue a regular DELETE statement, or they can use Oracle's TRUNCATE command. Now, most DBAs will get very nervous at the thought of allowing a user to truncate tables because there is more risk involved when this privilege is in the wrong hands. However, consider this scenario we recently encountered:

User A needs to delete data from a custom table and issues "DELETE FROM T;"
The table T in question is 50 GB in size, so behind the scenes, what does Oracle have to do for this "simple" statement? The database must keep track of all the rollback information for the records being deleted in the UNDO area, in case the user decides to roll back his changes or the statement fails. The modifications to the table generate change records in the database redo logs, which Oracle uses to track the sequence of changes made to the database, and these logs are copied to archived redo logs as they fill up. All of the indexes on table T must also be maintained to keep track of the record pointers, which generates still more redo log activity.

After running for 2 hours, the delete statement finally failed when it consumed all of the available UNDO space because of the volume of changes being made. The change generated so much activity that it also nearly exhausted the available disk space on the server where the archived logs are stored. Once the transaction failed, Oracle had to reverse the changes to put the data back the way it was. During this rollback process, which took an additional 2 hours, Oracle generated still more redo activity from reverting the changes. All of this to delete data that isn't needed anymore, and meanwhile system resources are being consumed, potentially impacting activity that end users are trying to complete.

Now compare this to the effect of issuing a TRUNCATE on the same table. Truncating table T completed in a matter of seconds, consumed no UNDO space, and generated virtually no redo activity. The truncate command also reclaims the space used by the table by resetting the high water mark (a marker Oracle uses to track the used blocks in a table). If the delete command had completed successfully, table T would still be 50 GB in size unless additional maintenance was performed to reclaim the unused free space. Truncate doesn't come without risks, though; the biggest is that it cannot be undone. When issuing a delete statement, the user has a chance to verify they deleted the intended data before issuing a commit (save) or rollback (undo) statement. With a truncate, there is no opportunity for Oracle to ask the user "Are you sure?" so this must be used with caution.
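
Here is the contrast side by side (a sketch; T is the example table from above):

-- DELETE is DML: fully logged, can be rolled back, and leaves the
-- high water mark (and the 50 GB of allocated space) in place
DELETE FROM T;
ROLLBACK;   -- still possible at this point

-- TRUNCATE is DDL: completes immediately, generates virtually no
-- undo/redo, resets the high water mark -- and cannot be undone
TRUNCATE TABLE T;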

Careful consideration should be given when allowing access to the TRUNCATE command. The elevated privileges required by the TRUNCATE command will allow the user to truncate any table in the database; however, the potential for reduced impact on the system can often outweigh the security risk in non-production databases. Hopefully this example shows how using the features that are available in Oracle can help save your system from unnecessary problems.
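
One way to get the benefit without handing out the elevated privilege directly is a definer-rights wrapper procedure owned by the table's schema (a minimal sketch; the procedure and role names are illustrations only):

-- Owned by the schema that owns the tables; callers receive only
-- EXECUTE on this procedure, never DROP ANY TABLE
CREATE OR REPLACE PROCEDURE trunc_table (p_table IN VARCHAR2) AS
BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' ||
        DBMS_ASSERT.SIMPLE_SQL_NAME(p_table);  -- guards against injection
END;
/

GRANT EXECUTE ON trunc_table TO developer_role;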

Monday, February 21, 2011

Finding PUBLIC Privileges

One of the most common security issues that a DBA will encounter in a JDE database is PUBLIC access to database objects. By default, JDE will grant full access to the Oracle internal PUBLIC role whenever a table is created. What this means is that any Oracle account that is created in the database will inherit full read/write access to all JDE data. This security hole becomes very apparent as soon as there is a requirement for an additional database account (for example, an application support analyst or a reporting tool). A DBA would typically set up a role with read-only access to the tables which the user needs to query; however, it is easy to overlook the default PUBLIC access.
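
To illustrate the exposure (a hypothetical sketch; the account name is made up, and F0101 is simply a familiar JDE table):

-- JDE's table creation effectively leaves grants like this in place:
--   GRANT ALL ON PRODDTA.F0101 TO PUBLIC;
-- so even a brand-new, minimally privileged account...
CREATE USER report_user IDENTIFIED BY some_password;
GRANT CREATE SESSION TO report_user;

-- ...can immediately read and modify JDE data once connected:
SELECT * FROM PRODDTA.F0101;
DELETE FROM PRODDTA.F0101;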

Here is a query that can be used to list the JDE objects which have PUBLIC access granted on them (this assumes E1 version 9.0; for version 8.11, replace 900 with 811, etc.):


set pagesize 9999
set linesize 120
set colsep ' '

column owner format a30 heading 'Owner'
column table_name format a30 heading 'Object'
column privilege format a40 heading 'Privilege'

select owner, table_name, privilege
from dba_tab_privs a
where grantee = 'PUBLIC'
and (owner like '%900' or
owner like '%SAVE' or
owner like '%CTL' or
owner like '%DTA')
and not exists (select null from dba_recyclebin b
where b.owner = a.owner
and b.object_name = a.table_name)
order by owner, table_name, privilege;

Identifying the objects with PUBLIC access is the first step in addressing the issue, but how can this security gap be closed to protect your data and satisfy security audit requirements? Simply revoking the PUBLIC grants only provides a temporary fix and will cause permission errors in the JDE application. A proper solution is to implement a robust, maintainable security framework along with automated tools to maintain the proper levels of security in your database. Contact DBConnect Solutions, Inc. for more information on how we can help secure your JDE database.

Saturday, February 5, 2011

What are my batch UBEs doing in the Oracle database?

The Oracle DBA is often asked to find out why a UBE is running so long, or if it's running at all inside the Oracle database.  A good place to start is to locate all UBEs that have database connections, and check what they are doing.  The following simple query can be used to locate the database connections:

column process  heading "Process|Id"            format a10
column sid      heading "Oracle|SID"            format 999999
column event    heading "Wait Event"            format a10 wrap
column machine  heading "Client|Machine"        format a10 wrap
column program  heading "Program"               format a10 wrap
column time     heading "Logon|Time"            format a20

set pagesize 9999

select  process,
        sid, 
        substr(event,1,80) event,
        substr(machine,1,20) machine,
        substr(program,1,30) program,
        to_char(logon_time,'mm/dd/yyyy hh24:mi:ss') time
from    gv$session
where   program like 'runbatch%'
order   by 1, logon_time;

Of particular importance is the Process Id, which indicates the operating system process number of the UBE application, and the Wait Event, which indicates what the database session is doing.  These bits of information will help the DBA determine if the UBE is active, and what it might be waiting on inside the database.
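
To dig one level deeper, you can also pull the SQL statement each of those sessions is currently executing (a sketch joining to gv$sql; columns trimmed for readability):

select  s.process,
        s.sid,
        substr(q.sql_text,1,60) sql_text
from    gv$session s,
        gv$sql q
where   s.inst_id = q.inst_id
and     s.sql_id = q.sql_id
and     s.sql_child_number = q.child_number
and     s.program like 'runbatch%';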

The above information can also be connected to the JDE job table to show the name of the UBE that is running, and to reveal exactly when the job was submitted.