[m-rev.] for review: simplify profiler feedback code

Paul Bone paul at bone.id.au
Mon Dec 1 11:08:36 AEDT 2014


On Sat, Nov 29, 2014 at 07:31:24PM +1100, Zoltan Somogyi wrote:
> 
> 
> On Sat, 29 Nov 2014 00:23:53 +1100, Paul Bone <paul at bone.id.au> wrote:
> > > > Now that I think about this (specifically for candidate parallel
> > > > conjunctions), I think that I'd prefer to use add_feedback_* and receive an
> > > > error if a feedback file already exists with this information in it, rather
> > > > than silently updating it.
> > > 
> > > OK, I will replace the add_* calls with replace_* calls.
> > 
> > That seems backwards, but I suspect that the misunderstanding is mine.  I
> > think I'll understand better when you've committed your code with this
> > replacement and I read them together.
> 
> Yes, I meant the exact opposite of what I wrote. I will replace
> the calls to replace_* with calls to add_*.

Okay, thanks.

> > > 
> > > The second source of the program name is not in the patch; it is supplied
> > > by the caller of the predicate in the patch, and the caller wasn't changed.
> > > 
> > > However, I don't understand your reasoning here. Surely the compatibility
> > > or otherwise of feedback provided by different programs depends on what
> > > those programs are. At the moment, we have just one, so we have no
> > > basis for any concrete argument one way or the other. Wouldn't this be
> > > the kind of thing that is better decided when we get a second kind of feedback?
> > 
> > An error message that says "This feedback file doesn't match the program
> > you're trying to analyse" or similar would be much clearer than some
> > "procedure 263 not found" or similar error.  Detecting this type of error
> > early may prevent confusing the user.
> 
> Ah. It seems that we were talking past each other. The comment on
> read_or_create in mdbcomp/feedback.m said "ProgramName is the name
> of the program that generated this feedback file". I took that to mean
> that it is the name of mdprof_create_feedback, or a similar Mercury tool
> that creates feedback files. Your comment makes it clear to me that it is
> meant to be the name NOT of the program that generated the feedback
> file, but of the program the feedback file was generated FOR, i.e. the
> name of the program whose execution created the Deep.data file
> the feedback was created FROM. That makes much more sense.
> I will fix the misleading comment, and I will look over all the code
> that handles these program names with that in mind.

Yes, that's right.  The comment is misleading and should be fixed.

> That brings to mind a question. Should we change the runtime system
> so that instead of always putting the profiling data in Deep.data and
> Deep.procrep, we put them into prog.Deep.data and prog.Deep.procrep?
> We already put MR_program into the Deep.data file; we could put
> a version of it (stripping out any initial directories, or maybe replacing
> all the slashes in MR_progname with dots) into the name of that file
> as well.

I like the idea of naming the profiling files after the program that
generates them.  However, I'd prefer to use the program name without the
full path; including the directories seems unnecessary.  We also talked
about putting both sets of data in the same file and compressing that file
(with libz).  I think it makes sense to do all of this together.
Additionally, if users do not want to install or link against libz for any
reason, we should permit uncompressed profiling data.  We already have a
similar option for libreadline, to avoid forcing people's debugging
binaries to be GPL-licensed.

> > > However, my original question was more basic, though it seems to have been
> > > buried in its original sentence. This was: is the testing of the functionality
> > > of the feedback mechanism during bootcheck thorough enough that
> > > the absence of an abort due to a program name mismatch during a bootcheck
> > > gives you confidence that the abort won't happen in real life either?
> > 
> > No, the test suite tests sensible use of the feedback tools.  In
> > practice, a user in a real situation could break the code in ways that I
> > can't imagine and that may never make sense.
> 
> I didn't think the test suite tested all possible bad uses of feedback files.
> My question is: does it test the correct handling of all the *intended*
> uses of feedback files? Do you remember?

If I remember correctly, it generates and then reads a feedback file from
some pre-existing deep profiling data.  It doesn't attempt to update
existing feedback files or anything like that.  It exercises all the
intended and "supported" uses of feedback files; in other words, it only
tests what makes sense in practical terms with the current implementation.

> > > Thanks for that. In that case, I think there are two reasonable choices
> > > for what we should do with that code:
> > > 
> > > - keep it around in commented out form, as in the patch, but with
> > >   an explanatory/historical comment saying what it is, or
> > > 
> > > - delete even the commented-out form of the code, but add the explanatory/historical
> > >   comment as above, together with a pointer to where someone needs to dig in git
> > >   to find the old code.
> > > 
> > > Any preference? 
> > 
> > I'd prefer to remove the code completely and leave a comment.  IMHO this is
> > one of the reasons to use version control and archive our old builds.  It's
> > also very unlikely that someone will want to test this code.  It only makes
> > sense to test either Jerome's or my older work to attempt to reproduce the
> > results in our papers, so it's important that there is a pointer to this old
> > code/data.  However, I don't think it's likely that someone will want to do
> > this.
> 
> OK, I agree, and will put that into effect.
> 
> Zoltan.
> 

Thanks.

-- 
Paul Bone
