[CDBI] An optimization question
Tim.Bunce at pobox.com
Wed Sep 26 09:41:48 BST 2007
On Tue, Sep 25, 2007 at 02:16:12PM -0700, Michael G Schwern wrote:
> Tim Bunce wrote:
> > On Tue, Sep 25, 2007 at 03:16:24PM -0400, Perrin Harkins wrote:
> >> On 9/25/07, Eric W. Bates <ericx at vineyard.net> wrote:
> >>> e.g. somehow 900,000 calls to name_lc seems excessive when the core
> >>> program is handling something on the order of 850 records.
> >> If you sort by wall time with the -r switch, this will most likely
> >> drop off the radar. Database calls take a lot of wall time and very
> >> little CPU time so the default sort from DProf gives very strange
> >> results in database apps.
> > The granularity of the cpu and system time 'ticks' is just too coarse to
> > get meaningful numbers for most apps, database or otherwise.
> I'd suggest using a combination of dprof and DBI::Profile to get your total
> performance numbers from a CDBI app.
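For the DBI::Profile side, a minimal sketch (assuming DBD::SQLite is installed; any DBD driver would do) looks like this. Setting the Profile attribute to '!Statement' aggregates times per SQL statement; the same effect is available without code changes by setting DBI_PROFILE=2 in the environment.

```perl
#!/usr/bin/perl
# Hedged sketch: enable DBI::Profile on a handle and dump the times
# it collected, aggregated per SQL statement.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });

# '!Statement' is the profile path: one node per distinct SQL string.
$dbh->{Profile} = '!Statement';

$dbh->do('CREATE TABLE t (id INTEGER)');
$dbh->do('INSERT INTO t VALUES (?)', undef, $_) for 1 .. 10;

# Each leaf node is an arrayref:
# [count, total, first, shortest, longest, time-of-first, time-of-last]
my $data = $dbh->{Profile}{Data};
for my $sql (sort keys %$data) {
    my ($count, $total) = @{ $data->{$sql} }[0, 1];
    printf "%5d calls %9.6fs  %s\n", $count, $total, $sql;
}

$dbh->{Profile} = 0;   # disable so DBI doesn't also print a summary at exit
```

Note the counts are per profiled method call (prepare, execute, etc.), not per do(), so expect more than one sample per statement.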
Devel::FastProf is also worth a look.
I'm working (very part time) on some patches to make it support multiple
levels of code resolution (line, *block*, sub, and *file*).
p.s. One of the annoying side effects of multi-CPU machines is that the
CPUs don't all return the same values for the high-res time. So a process
reading the high-res time frequently can see time move backwards.
The DBI tests used to fail if that happened but it was so common I had
to make it a warning instead. The moral of the tale is to take profiles
over long periods to even out this new kind of 'noise' - but still
don't trust figures for code that's not called often.
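The failure mode is easy to demonstrate: poll the high-res clock in a tight loop and count any samples where time appears to run backwards. A minimal sketch using the core Time::HiRes module (the loop count is arbitrary; on a machine with well-synchronized clocks you'd expect zero backwards steps):

```perl
#!/usr/bin/perl
# Hedged sketch: sample gettimeofday() repeatedly and count negative
# deltas, as can happen when a process migrates between CPUs whose
# high-res clocks aren't synchronized.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $backwards = 0;
my $prev = [gettimeofday];
for (1 .. 200_000) {
    my $now   = [gettimeofday];
    my $delta = tv_interval($prev, $now);   # seconds; may be negative
    $backwards++ if $delta < 0;
    $prev = $now;
}
print "saw $backwards backwards step(s)\n";
```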
I'd like to find some 'official' description of this issue and what's
being done to address it. Here's the best search I've found so far: