MediaWiki Job queue problem

MediaWiki Job queue problem

Krabina Bernhard
Dear SMW users,

This might not be a directly SMW-related question, but maybe some of you have experience with the job queue and the job table?

In the Vienna History wiki, jobs do not get done when running runJobs.php. The job table shows that the jobs currently in the queue are locked: there are entries in the "job_token" column, which results in the jobs not being processed by the script.

How can I find out what locked these jobs and why, how to unlock them, or what else to do?

There have been about 12,400 jobs sitting around for a week: https://www.wien.gv.at/wiki/api.php?action=query&meta=siteinfo&siprop=statistics&format=jsonfm
Suddenly, without anybody knowing why, roughly 5,000 were processed, but more than 7,000 are still sitting there.

showJobs.php always shows "0", and runJobs.php executes without error and without doing anything.
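
This is roughly how the locked entries can be listed directly (a sketch only, assuming the default MySQL job queue backend; the database name "my_wiki" is a placeholder):

  # Jobs with a non-empty job_token have been claimed by a runner
  # but were never completed or released.
  mysql my_wiki -e "SELECT job_id, job_cmd, job_namespace, job_title,
      job_attempts, job_token_timestamp
      FROM job WHERE job_token != '' LIMIT 20;"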

regards,
Bernhard

Re: [SMW-devel] MediaWiki Job queue problem

James HK
Hi,

As you are running MW 1.22.* [0], you may have an interest in reading [1].

[0] https://www.wien.gv.at/wiki/index.php/Spezial:Version

[1] https://www.mediawiki.org/wiki/Manual:Job_queue#Changes_introduced_in_MediaWiki_1.22

Cheers


Re: [SMW-devel] MediaWiki Job queue problem

Krabina Bernhard
Hi James,

thank you, I am aware of [1], but as far as I understand it, it only addresses the issue of jobs no longer being run automatically on page requests: "jobs will no longer run on page requests, and you must explicitly run runJobs.php to periodically run pending jobs."

There is no hint there of a problem with runJobs.php itself - and that is what we are having problems with: it just won't do anything.
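
For completeness, this is the kind of periodic invocation the manual is talking about (paths and limits here are placeholders, not our exact setup):

  # Process pending jobs in bounded batches; meant to be run regularly,
  # e.g. from cron every few minutes.
  php /var/www/wiki/maintenance/runJobs.php --maxjobs 500 --maxtime 600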

regards,
Bernhard



Re: [SMW-devel] MediaWiki Job queue problem

James HK
Hi,

The likelihood that this is related to SMW 1.8.0.5 is rather slim, and looking at [0] did not reveal any related issues.

> there are entries in the "job_token" column, which results in the jobs not being processed by the script.

According to [1], "job_token" is a "field that conveys process locks
on rows via process UUIDs", but that doesn't help much either.
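
If nothing else, the claim state can be inspected directly; a rough sketch for the default MySQL backend (the database name is a placeholder):

  # Count claimed jobs per type and show the oldest claim; a very old
  # claim suggests a runner died or was killed mid-job.
  mysql my_wiki -e "SELECT job_cmd, COUNT(*) AS claimed,
      MIN(job_token_timestamp) AS oldest_claim
      FROM job WHERE job_token != ''
      GROUP BY job_cmd;"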

[0] https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=runjobs
[1] https://www.mediawiki.org/wiki/Manual:Job_table

Cheers


Re: [SMW-devel] MediaWiki Job queue problem

Yaron Koren-2
Hi,

I believe the issue is the "job_attempts" field in the "job" table. Each
job is only attempted a certain number of times before MediaWiki basically
gives up and ignores it. My guess is that that column is greater than 0 for
all the rows in the table; I think if you just go into the database and run
something like "UPDATE job SET job_attempts = 0", they will get run again.

-Yaron

Re: [SMW-devel] MediaWiki Job queue problem

James HK
Hi,

> column is greater than 0 for all the rows in the table; I think if you just
> go into the database and call something like "UPDATE job SET job_attempts =
> 0", they will get run again.

In case this solves the issue, I sincerely hope there is a different
(more standard) way to reset the "job_attempts" field than manipulating
the job table with a raw SQL statement.

Cheers


Re: [SMW-devel] MediaWiki Job queue problem

Yaron Koren-2
I certainly hope so too - or that there's some other standard way to get
previously-attempted jobs to be run again. I only know that I tried that
SQL trick once, and it worked. Perhaps this is another reason why the
question should have instead been sent to the mediawiki-l mailing list. :)


--
WikiWorks · MediaWiki Consulting · http://wikiworks.com

Re: [SMW-devel] MediaWiki Job queue problem

James Montalvo
I'm not sure if this is related, but on my wiki I'm occasionally getting
"stuck" jobs. I've only noticed this since upgrading to MW 1.23 and SMW 2.0
from 1.22/1.8.0.5.

What I mean by "stuck" is that the jobs don't get executed when I run
runJobs.php, but for some reason they keep attempting to run over and over;
runJobs.php will literally run forever. After the non-offending jobs are
cleared, it's easy to see which are the offenders. Thus far I think all
offenders have been of type SMW::UpdateJob.

Is there some way to debug runJobs.php so I can provide better info?

--James

Re: [SMW-devel] MediaWiki Job queue problem

James HK
Hi,

> runJobs.php will literally run forever. After the non-offending jobs are
> cleared it's easy to see which are the offenders. Thus far I think all
> offenders have been of type SMW::UpdateJob.

I don't think the problem lies with `SMW\UpdateJob` itself, because it does
a simple "shallow update" of the store, while the management of job status
(including the number of attempts, IDs, etc.) is done by the MW JobQueue
(which first changed in 1.22 and then again in 1.23).

It does raise the question of whether all `SMW\UpdateJob`s are "stuck", or
only certain jobs belonging to a group of pages or a single page.

> runJobs.php, but for some reason they keep attempting to run over and over.

How do you know that the same job is run over and over again? Based on the
discussion above ("job_attempts"), a job with too many attempts is retired
after some time.

If the same job is run over and over again, what is displayed for the
"job_attempts" counter?

[0] went into SMW 2.0 to counteract any possible job duplicates for
the same `root title`.

[0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307

Cheers


Re: [SMW-devel] MediaWiki Job queue problem

Daren Welsh
We currently have five jobs that are "stuck". All of them have 1 for
job_attempts.

One has a job_cmd of refreshLinks in job namespace 10, and it is for a
template page. The other four have a job_cmd of SMW\UpdateJob in job
namespace 0 and are for "standard" pages. These pages do not seem to be
related by category or template.


--
__________________
http://mixcloud.com/darenwelsh
http://www.beatportfolio.com

Re: [SMW-devel] MediaWiki Job queue problem

James HK
Hi,

Just to make sure that I interpret the meaning of "stuck" correctly: after
`runJobs` finishes, those four jobs (five including the `refreshLinks` job)
are still visible in the job table with a "job_attempts" of 1. When running
`runJobs` again, are the same four `SMW\UpdateJob` jobs (same title and same
ID) executed again, incrementing "job_attempts" to 2?

If you empty the job table and execute `runJobs`, do the same five jobs
appear again after the run with "job_attempts" = 1?
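
To check that, something like the following after the test run would show which titles are back in the queue and whether any title is queued more than once (database name is a placeholder):

  # Group the remaining/reappearing jobs by type and title.
  mysql my_wiki -e "SELECT job_cmd, job_namespace, job_title,
      COUNT(*) AS n, MAX(job_attempts) AS attempts
      FROM job GROUP BY job_cmd, job_namespace, job_title
      ORDER BY n DESC;"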

Cheers


Re: [SMW-devel] MediaWiki Job queue problem

Daren Welsh
I have executed runJobs several times and the job_attempts remains at 1 for
those five jobs. We were thinking of doing a database backup today, then
deleting those five jobs from the table, then running the SMW "repair and
upgrade" via the admin special page.

Even if this clears the job queue, we'd like to understand what caused this
in the first place. I realize that's a very open-ended question :)

Daren



--
__________________
http://mixcloud.com/darenwelsh
http://www.beatportfolio.com

Re: [SMW-devel] MediaWiki Job queue problem

James Montalvo
Daren and I work together, so we have the same issue. One thing to add:
runJobs never finishes. Those "stuck" jobs just keep repeating over and over.


Re: [SMW-devel] MediaWiki Job queue problem

James Montalvo
Correction: runJobs evidently will stop (or time out) eventually, but only
after repeating certain jobs hundreds of times. I personally have never had
the patience to see that happen, but Daren has.

On Wed, Sep 24, 2014 at 4:55 PM, James Montalvo <[hidden email]>
wrote:

> Daren and I work together, so we have the same issue. One thing to add: Run
> jobs never finishes. Those "stuck" jobs just keep repeating over and over.
>
> On Wed, Sep 24, 2014 at 4:51 PM, Daren Welsh <[hidden email]> wrote:
>
>> I have executed runJobs several times and the job_attempts remains at 1
>> for those five jobs. We were thinking of doing a database backup today,
>> then delete those five jobs from the table, then run the SMW "repair and
>> upgrade" via the admin special page.
>>
>> Even if this clears the job queue, we'd like to understand what caused
>> this in the first place. I realize that's a very open-ended question :)
>>
>> Daren
>>
>>
>> On Wed, Sep 24, 2014 at 4:30 PM, James HK <[hidden email]>
>> wrote:
>>
>>> Hi,
>>>
>>> > We currently have five jobs that are "stuck". All of them have 1 for
>>> > job_attempts.
>>> >
>>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>> > template page.
>>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>> are for
>>> > "standard" pages. These pages do not seem to be related based on
>>> category
>>> > or template.
>>>
>>> Just to make sure that I interpret the meaning of "stuck" correctly,
>>> after finishing `runJobs` those four jobs (five with the
>>> `refreshLinks` jobs) are still visible in the job table with an
>>> "job_attempts" of 1. When running `runJobs` again the same four
>>> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
>>> executed and increment the "job_attempts" to 2?
>>>
>>> If you empty the job table and execute `runJobs` does the same five
>>> jobs appear again after the run with "job_attempts" = 1?
>>>
>>> Cheers
>>>
>>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>> > We currently have five jobs that are "stuck". All of them have 1 for
>>> > job_attempts.
>>> >
>>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>> > template page.
>>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>> are for
>>> > "standard" pages. These pages do not seem to be related based on
>>> category
>>> > or template.
>>> >
>>> > On Wed, Sep 24, 2014 at 3:37 PM, James HK <
>>> [hidden email]>
>>> > wrote:
>>> >
>>> >> Hi,
>>> >>
>>> >> > runJobs.php will literally run forever. After the non-offending jobs
>>> >> > are
>>> >> > cleared it's easy to see which are the offenders. Thus far I think
>>> all
>>> >> > offenders have been of type SMW::UpdateJob.
>>> >>
>>> >> I don't think the problem is with the `SMW\UpdateJob` because it does
>>> >> a simple "shallow update" of the store while the management of job
>>> >> status (including how many attempts, id's etc.) are done by the MW
>>> >> JobQueue (which has first change in 1.22 and then again in 1.23).
>>> >>
>>> >> It does beg the question whether all `SMW\UpdateJob`'s are "stuck" or
>>> >> only certain jobs belonging to a group of pages or single page?
>>> >>
>>> >> > runJobs.php, but for some reason they keep attempting to run over
>>> and
>>> >> over.
>>> >>
>>> >> How do you know that the same job is run over and over again because
>>> >> based and above discussion ("job_attempts") a job with too many
>>> >> attempts is retired after some time.
>>> >>
>>> >> If the same job is run over and over again, what is displayed for the
>>> >> "job_attempts" counter?
>>> >>
>>> >> [0] went into SMW 2.0 to counteract any possible job duplicates for
>>> >> the same `root title`.
>>> >>
>>> >> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>>> >>
>>> >> Cheers
>>> >>
>>> >> On 9/25/14, James Montalvo <[hidden email]> wrote:
>>> >> > I'm not sure if this is related, but on my wiki I'm occasionally
>>> >> > getting
>>> >> > "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and
>>> SMW
>>> >> 2.0
>>> >> > from 1.22/1.8.0.5.
>>> >> >
>>> >> > What I mean by "stuck" is that the jobs don't get executed when I do
>>> >> > runJobs.php, but for some reason they keep attempting to run over
>>> and
>>> >> over.
>>> >> > runJobs.php will literally run forever. After the non-offending jobs
>>> >> > are
>>> >> > cleared it's easy to see which are the offenders. Thus far I think
>>> all
>>> >> > offenders have been of type SMW::UpdateJob.
>>> >> >
>>> >> > Is there some way to debug runJobs.php so I can provide better info?
>>> >> >
>>> >> > --James
>>> >> > On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]>
>>> wrote:
>>> >> >
>>> >> >> I certainly hope so too - or that there's some other standard way
>>> to
>>> >> >> get
>>> >> >> previously-attempted jobs to be run again. I only know that I tried
>>> >> >> that
>>> >> >> SQL trick once, and it worked. Perhaps this is another reason why
>>> the
>>> >> >> question should have instead been sent to the mediawiki-l mailing
>>> >> >> list.
>>> >> >> :)
>>> >> >>
>>> >> >> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>>> >> [hidden email]>
>>> >> >> wrote:
>>> >> >>
>>> >> >> > Hi,
>>> >> >> >
>>> >> >> > > column is greater than 0 for all the rows in the table; I
>>> think if
>>> >> >> > > you
>>> >> >> > just
>>> >> >> > > go into the database and call something like "UPDATE job SET
>>> >> >> > job_attempts =
>>> >> >> > > 0", they will get run again.
>>> >> >> >
>>> >> >> > In case this solves the issue, I sincerely hope there is a
>>> different
>>> >> >> > way (a more standard way) to reset the "job_attempts" field other
>>> >> >> > than
>>> >> >> > by using a SQL statement to manipulate the job table.
>>> >> >> >
>>> >> >> > Cheers
>>> >> >> >
>>> >> >> > On 9/25/14, Yaron Koren <[hidden email]> wrote:
>>> >> >> > > Hi,
>>> >> >> > >
>>> >> >> > > I believe the issue is the "job_attempts" field in the "job"
>>> >> >> > > table.
>>> >> I
>>> >> >> > > believe each job is only attempted a certain number of times
>>> >> >> > > before
>>> >> >> > > MediaWiki basically just gives up and ignores it. My guess is
>>> that
>>> >> >> > > that
>>> >> >> > > column is greater than 0 for all the rows in the table; I
>>> think if
>>> >> >> > > you
>>> >> >> > just
>>> >> >> > > go into the database and call something like "UPDATE job SET
>>> >> >> > job_attempts =
>>> >> >> > > 0", they will get run again.
>>> >> >> > >
>>> >> >> > > -Yaron
>>> >> >> > >
>>> >> >> >
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> --
>>> >> >> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>>> >> >>
>>> >> >>
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > __________________
>>> > http://mixcloud.com/darenwelsh
>>> > http://www.beatportfolio.com
>>> >
>>>
>>
>>
>>
>> --
>> __________________
>> http://mixcloud.com/darenwelsh
>> http://www.beatportfolio.com
>>
>
>

Re: [SMW-devel] MediaWiki Job queue problem

James HK
In reply to this post by Daren Welsh
Hi,

> I have executed runJobs several times and the job_attempts remains at 1 for
> those five jobs. We were thinking of doing a database backup today, then

I'm curious about the "job_attempts" field, as I would have expected it
to increment whenever an actual execution attempt is made (not merely
when a line is printed on the command shell). To check whether the job
really gets executed when running `runJobs`, just add a simple
`var_dump( 'hello world' )` line to [0] and verify the `SMW\UpdateJob`
activity.

[0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/blob/master/includes/src/MediaWiki/Jobs/UpdateJob.php#L118
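
For a database-side cross-check (just a sketch, using the job table
columns of MW 1.22/1.23 and ignoring any table prefix your wiki may use),
the query below could be run before and after each `runJobs` pass to see
whether "job_attempts" or "job_token" actually changes for the suspect
rows:

    -- watch the suspect rows across runJobs passes
    -- (note the escaped backslash in the SMW job name)
    SELECT job_id, job_cmd, job_namespace, job_title,
           job_attempts, job_token, job_token_timestamp
    FROM job
    WHERE job_cmd IN ('SMW\\UpdateJob', 'refreshLinks')
    ORDER BY job_id;

A row whose job_token stays non-empty while job_attempts never moves would
point at a claimed-but-never-completed job rather than at the job class
itself.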

Cheers

On 9/25/14, Daren Welsh <[hidden email]> wrote:

> I have executed runJobs several times and the job_attempts remains at 1 for
> those five jobs. We were thinking of doing a database backup today, then
> delete those five jobs from the table, then run the SMW "repair and
> upgrade" via the admin special page.
>
> Even if this clears the job queue, we'd like to understand what caused this
> in the first place. I realize that's a very open-ended question :)
>
> Daren
>
>
> On Wed, Sep 24, 2014 at 4:30 PM, James HK <[hidden email]>
> wrote:
>
>> Hi,
>>
>> > We currently have five jobs that are "stuck". All of them have 1 for
>> > job_attempts.
>> >
>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>> > template page.
>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>> for
>> > "standard" pages. These pages do not seem to be related based on
>> > category
>> > or template.
>>
>> Just to make sure that I interpret the meaning of "stuck" correctly,
>> after finishing `runJobs` those four jobs (five with the
>> `refreshLinks` jobs) are still visible in the job table with an
>> "job_attempts" of 1. When running `runJobs` again the same four
>> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
>> executed and increment the "job_attempts" to 2?
>>
>> If you empty the job table and execute `runJobs` does the same five
>> jobs appear again after the run with "job_attempts" = 1?
>>
>> Cheers
>>
>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>> > We currently have five jobs that are "stuck". All of them have 1 for
>> > job_attempts.
>> >
>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>> > template page.
>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>> for
>> > "standard" pages. These pages do not seem to be related based on
>> > category
>> > or template.
>> >
>> > On Wed, Sep 24, 2014 at 3:37 PM, James HK
>> > <[hidden email]>
>> > wrote:
>> >
>> >> Hi,
>> >>
>> >> > runJobs.php will literally run forever. After the non-offending jobs
>> >> > are
>> >> > cleared it's easy to see which are the offenders. Thus far I think
>> >> > all
>> >> > offenders have been of type SMW::UpdateJob.
>> >>
>> >> I don't think the problem is with the `SMW\UpdateJob` because it does
>> >> a simple "shallow update" of the store while the management of job
>> >> status (including how many attempts, id's etc.) are done by the MW
>> >> JobQueue (which has first change in 1.22 and then again in 1.23).
>> >>
>> >> It does beg the question whether all `SMW\UpdateJob`'s are "stuck" or
>> >> only certain jobs belonging to a group of pages or single page?
>> >>
>> >> > runJobs.php, but for some reason they keep attempting to run over
>> >> > and
>> >> over.
>> >>
>> >> How do you know that the same job is run over and over again because
>> >> based and above discussion ("job_attempts") a job with too many
>> >> attempts is retired after some time.
>> >>
>> >> If the same job is run over and over again, what is displayed for the
>> >> "job_attempts" counter?
>> >>
>> >> [0] went into SMW 2.0 to counteract any possible job duplicates for
>> >> the same `root title`.
>> >>
>> >> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>> >>
>> >> Cheers
>> >>
>> >> On 9/25/14, James Montalvo <[hidden email]> wrote:
>> >> > I'm not sure if this is related, but on my wiki I'm occasionally
>> >> > getting
>> >> > "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and
>> SMW
>> >> 2.0
>> >> > from 1.22/1.8.0.5.
>> >> >
>> >> > What I mean by "stuck" is that the jobs don't get executed when I do
>> >> > runJobs.php, but for some reason they keep attempting to run over
>> >> > and
>> >> over.
>> >> > runJobs.php will literally run forever. After the non-offending jobs
>> >> > are
>> >> > cleared it's easy to see which are the offenders. Thus far I think
>> >> > all
>> >> > offenders have been of type SMW::UpdateJob.
>> >> >
>> >> > Is there some way to debug runJobs.php so I can provide better info?
>> >> >
>> >> > --James
>> >> > On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]> wrote:
>> >> >
>> >> >> I certainly hope so too - or that there's some other standard way
>> >> >> to
>> >> >> get
>> >> >> previously-attempted jobs to be run again. I only know that I tried
>> >> >> that
>> >> >> SQL trick once, and it worked. Perhaps this is another reason why
>> >> >> the
>> >> >> question should have instead been sent to the mediawiki-l mailing
>> >> >> list.
>> >> >> :)
>> >> >>
>> >> >> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>> >> [hidden email]>
>> >> >> wrote:
>> >> >>
>> >> >> > Hi,
>> >> >> >
>> >> >> > > column is greater than 0 for all the rows in the table; I think
>> if
>> >> >> > > you
>> >> >> > just
>> >> >> > > go into the database and call something like "UPDATE job SET
>> >> >> > job_attempts =
>> >> >> > > 0", they will get run again.
>> >> >> >
>> >> >> > In case this solves the issue, I sincerely hope there is a
>> different
>> >> >> > way (a more standard way) to reset the "job_attempts" field other
>> >> >> > than
>> >> >> > by using a SQL statement to manipulate the job table.
>> >> >> >
>> >> >> > Cheers
>> >> >> >
>> >> >> > On 9/25/14, Yaron Koren <[hidden email]> wrote:
>> >> >> > > Hi,
>> >> >> > >
>> >> >> > > I believe the issue is the "job_attempts" field in the "job"
>> >> >> > > table.
>> >> I
>> >> >> > > believe each job is only attempted a certain number of times
>> >> >> > > before
>> >> >> > > MediaWiki basically just gives up and ignores it. My guess is
>> that
>> >> >> > > that
>> >> >> > > column is greater than 0 for all the rows in the table; I think
>> if
>> >> >> > > you
>> >> >> > just
>> >> >> > > go into the database and call something like "UPDATE job SET
>> >> >> > job_attempts =
>> >> >> > > 0", they will get run again.
>> >> >> > >
>> >> >> > > -Yaron
>> >> >> > >
>> >> >> >
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>> >> >>
>> >> >>
>> >>
>> >
>> >
>> >
>> > --
>> > __________________
>> > http://mixcloud.com/darenwelsh
>> > http://www.beatportfolio.com
>> >
>>
>
>
>
> --
> __________________
> http://mixcloud.com/darenwelsh
> http://www.beatportfolio.com
>


Re: [SMW-devel] MediaWiki Job queue problem

Phil Legault
In reply to this post by James Montalvo
I had the same issue a while ago after I restored a database.
I figured it was because there were jobs in the queue when I ran the backup, and after the restore they kept running over and over.

I ran another backup from my production database after I had cleared the job queue, and that backup worked fine afterwards.
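
For reference, a minimal way to check for (and, if really necessary, drop)
leftover jobs before taking such a backup could look like the following.
This is direct manipulation of the job table, so it is only a sketch and
should be preceded by a backup of that table:

    -- how many jobs of each type are still pending?
    SELECT job_cmd, COUNT(*) AS pending
    FROM job
    GROUP BY job_cmd;

    -- drop everything from the queue (destructive; only if the leftover
    -- jobs are known to be expendable, e.g. right before a clean re-backup)
    DELETE FROM job;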



-----Original Message-----
From: James Montalvo [mailto:[hidden email]]
Sent: Wednesday, September 24, 2014 5:55 PM
To: Daren Welsh
Cc: Semantic MediaWiki developers; Yaron Koren; Semantic MediaWiki users
Subject: Re: [Semediawiki-user] [SMW-devel] MediaWiki Job queue problem

Daren and I work together, so we have the same issue. One thing to add: Run jobs never finishes. Those "stuck" jobs just keep repeating over and over.

On Wed, Sep 24, 2014 at 4:51 PM, Daren Welsh <[hidden email]> wrote:

> I have executed runJobs several times and the job_attempts remains at
> 1 for those five jobs. We were thinking of doing a database backup
> today, then delete those five jobs from the table, then run the SMW
> "repair and upgrade" via the admin special page.
>
> Even if this clears the job queue, we'd like to understand what caused
> this in the first place. I realize that's a very open-ended question
> :)
>
> Daren
>
>
> On Wed, Sep 24, 2014 at 4:30 PM, James HK
> <[hidden email]>
> wrote:
>
>> Hi,
>>
>> > We currently have five jobs that are "stuck". All of them have 1
>> > for job_attempts.
>> >
>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>> > template page.
>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>> > are
>> for
>> > "standard" pages. These pages do not seem to be related based on
>> category
>> > or template.
>>
>> Just to make sure that I interpret the meaning of "stuck" correctly,
>> after finishing `runJobs` those four jobs (five with the
>> `refreshLinks` jobs) are still visible in the job table with an
>> "job_attempts" of 1. When running `runJobs` again the same four
>> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
>> executed and increment the "job_attempts" to 2?
>>
>> If you empty the job table and execute `runJobs` does the same five
>> jobs appear again after the run with "job_attempts" = 1?
>>
>> Cheers
>>
>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>> > We currently have five jobs that are "stuck". All of them have 1
>> > for job_attempts.
>> >
>> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
>> > template page.
>> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>> > are
>> for
>> > "standard" pages. These pages do not seem to be related based on
>> category
>> > or template.
>> >
>> > On Wed, Sep 24, 2014 at 3:37 PM, James HK
>> > <[hidden email]
>> >
>> > wrote:
>> >
>> >> Hi,
>> >>
>> >> > runJobs.php will literally run forever. After the non-offending
>> >> > jobs are cleared it's easy to see which are the offenders. Thus
>> >> > far I think
>> all
>> >> > offenders have been of type SMW::UpdateJob.
>> >>
>> >> I don't think the problem is with the `SMW\UpdateJob` because it
>> >> does a simple "shallow update" of the store while the management
>> >> of job status (including how many attempts, id's etc.) are done by
>> >> the MW JobQueue (which has first change in 1.22 and then again in 1.23).
>> >>
>> >> It does beg the question whether all `SMW\UpdateJob`'s are "stuck"
>> >> or only certain jobs belonging to a group of pages or single page?
>> >>
>> >> > runJobs.php, but for some reason they keep attempting to run
>> >> > over and
>> >> over.
>> >>
>> >> How do you know that the same job is run over and over again
>> >> because based and above discussion ("job_attempts") a job with too
>> >> many attempts is retired after some time.
>> >>
>> >> If the same job is run over and over again, what is displayed for
>> >> the "job_attempts" counter?
>> >>
>> >> [0] went into SMW 2.0 to counteract any possible job duplicates
>> >> for the same `root title`.
>> >>
>> >> [0]
>> >> https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>> >>
>> >> Cheers
>> >>
>> >> On 9/25/14, James Montalvo <[hidden email]> wrote:
>> >> > I'm not sure if this is related, but on my wiki I'm occasionally
>> >> > getting "stuck" jobs. I've only noticed this since upgrading to
>> >> > MW 1.23 and
>> SMW
>> >> 2.0
>> >> > from 1.22/1.8.0.5.
>> >> >
>> >> > What I mean by "stuck" is that the jobs don't get executed when
>> >> > I do runJobs.php, but for some reason they keep attempting to
>> >> > run over and
>> >> over.
>> >> > runJobs.php will literally run forever. After the non-offending
>> >> > jobs are cleared it's easy to see which are the offenders. Thus
>> >> > far I think
>> all
>> >> > offenders have been of type SMW::UpdateJob.
>> >> >
>> >> > Is there some way to debug runJobs.php so I can provide better info?
>> >> >
>> >> > --James
>> >> > On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]> wrote:
>> >> >
>> >> >> I certainly hope so too - or that there's some other standard
>> >> >> way to get previously-attempted jobs to be run again. I only
>> >> >> know that I tried that SQL trick once, and it worked. Perhaps
>> >> >> this is another reason why
>> the
>> >> >> question should have instead been sent to the mediawiki-l
>> >> >> mailing list.
>> >> >> :)
>> >> >>
>> >> >> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>> >> [hidden email]>
>> >> >> wrote:
>> >> >>
>> >> >> > Hi,
>> >> >> >
>> >> >> > > column is greater than 0 for all the rows in the table; I
>> >> >> > > think
>> if
>> >> >> > > you
>> >> >> > just
>> >> >> > > go into the database and call something like "UPDATE job
>> >> >> > > SET
>> >> >> > job_attempts =
>> >> >> > > 0", they will get run again.
>> >> >> >
>> >> >> > In case this solves the issue, I sincerely hope there is a
>> different
>> >> >> > way (a more standard way) to reset the "job_attempts" field
>> >> >> > other than by using a SQL statement to manipulate the job
>> >> >> > table.
>> >> >> >
>> >> >> > Cheers
>> >> >> >
>> >> >> > On 9/25/14, Yaron Koren <[hidden email]> wrote:
>> >> >> > > Hi,
>> >> >> > >
>> >> >> > > I believe the issue is the "job_attempts" field in the "job"
>> >> >> > > table.
>> >> I
>> >> >> > > believe each job is only attempted a certain number of
>> >> >> > > times before MediaWiki basically just gives up and ignores
>> >> >> > > it. My guess is
>> that
>> >> >> > > that
>> >> >> > > column is greater than 0 for all the rows in the table; I
>> >> >> > > think
>> if
>> >> >> > > you
>> >> >> > just
>> >> >> > > go into the database and call something like "UPDATE job
>> >> >> > > SET
>> >> >> > job_attempts =
>> >> >> > > 0", they will get run again.
>> >> >> > >
>> >> >> > > -Yaron
>> >> >> > >
>> >> >> >
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>> >> >>
>> >> >>
>> >>
>> >
>> >
>> >
>> > --
>> > __________________
>> > http://mixcloud.com/darenwelsh
>> > http://www.beatportfolio.com
>> >
>>
>
>
>
> --
> __________________
> http://mixcloud.com/darenwelsh
> http://www.beatportfolio.com
>

Re: [SMW-devel] MediaWiki Job queue problem

Krabina Bernhard
Our problem is not only that we cannot get the hanging jobs to run, but that even new jobs being created (by changing a template or by a CSV import with data transfer) will not run.
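
One way to narrow this down is to check whether the new jobs also sit in
the job table with a non-empty "job_token", i.e. whether they show the
same claimed-but-never-completed symptom described at the start of this
thread. A sketch (assuming the MW 1.22/1.23 job table columns, and after
taking a database backup as suggested elsewhere in the thread):

    -- list claimed ("locked") jobs and when they were claimed
    SELECT job_id, job_cmd, job_title, job_attempts, job_token_timestamp
    FROM job
    WHERE job_token != '';

    -- release the stale claims so runJobs.php can pick the jobs up again
    UPDATE job
    SET job_token = '', job_token_timestamp = NULL
    WHERE job_token != '';

Whether releasing the claims is enough of course depends on why the jobs
were claimed and then abandoned in the first place.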

regards,
Bernhard

----- Original Message -----

> I had the same issue a while ago after I restored a database.
> I figured it was because when I ran the backup there where jobs in the queue
> and after the restore they kept running over and over and etc.
>
> I ran another backup from my production database after I cleared the runjobs,
> and this worked out fine after.
>
>
>
> -----Original Message-----
> From: James Montalvo [mailto:[hidden email]]
> Sent: Wednesday, September 24, 2014 5:55 PM
> To: Daren Welsh
> Cc: Semantic MediaWiki developers; Yaron Koren; Semantic MediaWiki users
> Subject: Re: [Semediawiki-user] [SMW-devel] MediaWiki Job queue problem
>
> Daren and I work together, so we have the same issue. One thing to add: Run
> jobs never finishes. Those "stuck" jobs just keep repeating over and over.
>
> On Wed, Sep 24, 2014 at 4:51 PM, Daren Welsh <[hidden email]> wrote:
>
> > I have executed runJobs several times and the job_attempts remains at
> > 1 for those five jobs. We were thinking of doing a database backup
> > today, then delete those five jobs from the table, then run the SMW
> > "repair and upgrade" via the admin special page.
> >
> > Even if this clears the job queue, we'd like to understand what caused
> > this in the first place. I realize that's a very open-ended question
> > :)
> >
> > Daren
> >
> >
> > On Wed, Sep 24, 2014 at 4:30 PM, James HK
> > <[hidden email]>
> > wrote:
> >
> >> Hi,
> >>
> >> > We currently have five jobs that are "stuck". All of them have 1
> >> > for job_attempts.
> >> >
> >> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
> >> > template page.
> >> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
> >> > are
> >> for
> >> > "standard" pages. These pages do not seem to be related based on
> >> category
> >> > or template.
> >>
> >> Just to make sure that I interpret the meaning of "stuck" correctly,
> >> after finishing `runJobs` those four jobs (five with the
> >> `refreshLinks` jobs) are still visible in the job table with an
> >> "job_attempts" of 1. When running `runJobs` again the same four
> >> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
> >> executed and increment the "job_attempts" to 2?
> >>
> >> If you empty the job table and execute `runJobs` does the same five
> >> jobs appear again after the run with "job_attempts" = 1?
> >>
> >> Cheers
> >>
> >> On 9/25/14, Daren Welsh <[hidden email]> wrote:
> >> > We currently have five jobs that are "stuck". All of them have 1
> >> > for job_attempts.
> >> >
> >> > One has job_cmd of refreshLinks in job namespace 10 and it is for a
> >> > template page.
> >> > The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
> >> > are
> >> for
> >> > "standard" pages. These pages do not seem to be related based on
> >> category
> >> > or template.
> >> >
> >> > On Wed, Sep 24, 2014 at 3:37 PM, James HK
> >> > <[hidden email]
> >> >
> >> > wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> > runJobs.php will literally run forever. After the non-offending
> >> >> > jobs are cleared it's easy to see which are the offenders. Thus
> >> >> > far I think
> >> all
> >> >> > offenders have been of type SMW::UpdateJob.
> >> >>
> >> >> I don't think the problem is with the `SMW\UpdateJob` because it
> >> >> does a simple "shallow update" of the store while the management
> >> >> of job status (including how many attempts, id's etc.) are done by
> >> >> the MW JobQueue (which has first change in 1.22 and then again in
> >> >> 1.23).
> >> >>
> >> >> It does beg the question whether all `SMW\UpdateJob`'s are "stuck"
> >> >> or only certain jobs belonging to a group of pages or single page?
> >> >>
> >> >> > runJobs.php, but for some reason they keep attempting to run
> >> >> > over and
> >> >> over.
> >> >>
> >> >> How do you know that the same job is run over and over again
> >> >> because based and above discussion ("job_attempts") a job with too
> >> >> many attempts is retired after some time.
> >> >>
> >> >> If the same job is run over and over again, what is displayed for
> >> >> the "job_attempts" counter?
> >> >>
> >> >> [0] went into SMW 2.0 to counteract any possible job duplicates
> >> >> for the same `root title`.
> >> >>
> >> >> [0]
> >> >> https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
> >> >>
> >> >> Cheers
> >> >>
> >> >> On 9/25/14, James Montalvo <[hidden email]> wrote:
> >> >> > I'm not sure if this is related, but on my wiki I'm occasionally
> >> >> > getting "stuck" jobs. I've only noticed this since upgrading to
> >> >> > MW 1.23 and
> >> SMW
> >> >> 2.0
> >> >> > from 1.22/1.8.0.5.
> >> >> >
> >> >> > What I mean by "stuck" is that the jobs don't get executed when
> >> >> > I do runJobs.php, but for some reason they keep attempting to
> >> >> > run over and
> >> >> over.
> >> >> > runJobs.php will literally run forever. After the non-offending
> >> >> > jobs are cleared it's easy to see which are the offenders. Thus
> >> >> > far I think
> >> all
> >> >> > offenders have been of type SMW::UpdateJob.
> >> >> >
> >> >> > Is there some way to debug runJobs.php so I can provide better info?
> >> >> >
> >> >> > --James
> >> >> > On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]> wrote:
> >> >> >
> >> >> >> I certainly hope so too - or that there's some other standard
> >> >> >> way to get previously-attempted jobs to be run again. I only
> >> >> >> know that I tried that SQL trick once, and it worked. Perhaps
> >> >> >> this is another reason why
> >> the
> >> >> >> question should have instead been sent to the mediawiki-l
> >> >> >> mailing list.
> >> >> >> :)
> >> >> >>
> >> >> >> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
> >> >> [hidden email]>
> >> >> >> wrote:
> >> >> >>
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > > column is greater than 0 for all the rows in the table; I
> >> >> >> > > think
> >> if
> >> >> >> > > you
> >> >> >> > just
> >> >> >> > > go into the database and call something like "UPDATE job
> >> >> >> > > SET
> >> >> >> > job_attempts =
> >> >> >> > > 0", they will get run again.
> >> >> >> >
> >> >> >> > In case this solves the issue, I sincerely hope there is a
> >> different
> >> >> >> > way (a more standard way) to reset the "job_attempts" field
> >> >> >> > other than by using a SQL statement to manipulate the job
> >> >> >> > table.
> >> >> >> >
> >> >> >> > Cheers
> >> >> >> >
> >> >> >> > On 9/25/14, Yaron Koren <[hidden email]> wrote:
> >> >> >> > > Hi,
> >> >> >> > >
> >> >> >> > > I believe the issue is the "job_attempts" field in the "job"
> >> >> >> > > table.
> >> >> I
> >> >> >> > > believe each job is only attempted a certain number of
> >> >> >> > > times before MediaWiki basically just gives up and ignores
> >> >> >> > > it. My guess is
> >> that
> >> >> >> > > that
> >> >> >> > > column is greater than 0 for all the rows in the table; I
> >> >> >> > > think
> >> if
> >> >> >> > > you
> >> >> >> > just
> >> >> >> > > go into the database and call something like "UPDATE job
> >> >> >> > > SET
> >> >> >> > job_attempts =
> >> >> >> > > 0", they will get run again.
> >> >> >> > >
> >> >> >> > > -Yaron
> >> >> >> > >
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> WikiWorks · MediaWiki Consulting · http://wikiworks.com
> >> >> >>
> >> >> >>
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > __________________
> >> > http://mixcloud.com/darenwelsh
> >> > http://www.beatportfolio.com
> >> >
> >>
> >
> >
> >
> > --
> > __________________
> > http://mixcloud.com/darenwelsh
> > http://www.beatportfolio.com
> >


Re: [SMW-devel] MediaWiki Job queue problem

Markus Krötzsch-2
In reply to this post by James HK
Hi again,

I have isolated a single specimen of an infinite job cycle on my wiki.
The details are attached (I hope the attachment makes it to the list as
well). In short, the loop is triggered by an update job on a redirect
page. For some reason, the update job creates a new instance of an
update job for the same page.

The SMW tables do not contain correct information about the redirect
page: it has data stored about itself and is not marked as a redirect. I
do not know if this is the cause of the problem or a side effect.
However, since the page was stored with a #REDIRECT on it, this regular
storing of the data should already have created correct data -- this
should not depend on any update job.

The file also contains a sample of a job queue that I had at first,
where one of the indestructible jobs has two more copies of itself. They
were never modified during my tests, but this might explain why a job
queue can get longer and longer in such a case (new job instances,
wherever they come from, are protected by their indestructible copies).
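
A quick way to check for such duplicates is to group the job table by
command and target title (only a sketch against the standard job table
layout):

    -- jobs that exist in more than one copy for the same target page
    SELECT job_cmd, job_namespace, job_title, COUNT(*) AS copies
    FROM job
    GROUP BY job_cmd, job_namespace, job_title
    HAVING COUNT(*) > 1
    ORDER BY copies DESC;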

The wrong SMW data might be the deeper issue here, but in any case it
should be possible to make the UpdateJob execution robust against this kind
of cycle to address the main problem. Maybe the UpdateJob would (if
successful) actually fix the data, though it can hardly be its cause.

Regards,

Markus


On 25.09.2014 00:02, James HK wrote:

> Hi,
>
>> I have executed runJobs several times and the job_attempts remains at 1 for
>> those five jobs. We were thinking of doing a database backup today, then
>
> I'm curious about the "job_attempts" field as I would have expected to
> see an increment for when the job (actually there has been an attempt
> to execute and not only display a line on command shell) and to see
> whether the job actually gets execute when running `runJobs`, just add
> a simple `var_dump( 'hello world' )` line to [0] and verify a
> `SMW\UpdateJob` activity.
>
> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/blob/master/includes/src/MediaWiki/Jobs/UpdateJob.php#L118
>
> Cheers
>
> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>> I have executed runJobs several times and the job_attempts remains at 1 for
>> those five jobs. We were thinking of doing a database backup today, then
>> delete those five jobs from the table, then run the SMW "repair and
>> upgrade" via the admin special page.
>>
>> Even if this clears the job queue, we'd like to understand what caused this
>> in the first place. I realize that's a very open-ended question :)
>>
>> Daren
>>
>>
>> On Wed, Sep 24, 2014 at 4:30 PM, James HK <[hidden email]>
>> wrote:
>>
>>> Hi,
>>>
>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>> job_attempts.
>>>>
>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>> template page.
>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>>> for
>>>> "standard" pages. These pages do not seem to be related based on
>>>> category
>>>> or template.
>>>
>>> Just to make sure that I interpret the meaning of "stuck" correctly,
>>> after finishing `runJobs` those four jobs (five with the
>>> `refreshLinks` jobs) are still visible in the job table with an
>>> "job_attempts" of 1. When running `runJobs` again the same four
>>> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
>>> executed and increment the "job_attempts" to 2?
>>>
>>> If you empty the job table and execute `runJobs` does the same five
>>> jobs appear again after the run with "job_attempts" = 1?
>>>
>>> Cheers
>>>
>>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>> job_attempts.
>>>>
>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>> template page.
>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>>> for
>>>> "standard" pages. These pages do not seem to be related based on
>>>> category
>>>> or template.
>>>>
>>>> On Wed, Sep 24, 2014 at 3:37 PM, James HK
>>>> <[hidden email]>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>> are
>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>> all
>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>
>>>>> I don't think the problem is with the `SMW\UpdateJob` because it does
>>>>> a simple "shallow update" of the store while the management of job
>>>>> status (including how many attempts, id's etc.) are done by the MW
>>>>> JobQueue (which has first change in 1.22 and then again in 1.23).
>>>>>
>>>>> It does beg the question whether all `SMW\UpdateJob`'s are "stuck" or
>>>>> only certain jobs belonging to a group of pages or single page?
>>>>>
>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>> and
>>>>> over.
>>>>>
>>>>> How do you know that the same job is run over and over again because
>>>>> based and above discussion ("job_attempts") a job with too many
>>>>> attempts is retired after some time.
>>>>>
>>>>> If the same job is run over and over again, what is displayed for the
>>>>> "job_attempts" counter?
>>>>>
>>>>> [0] went into SMW 2.0 to counteract any possible job duplicates for
>>>>> the same `root title`.
>>>>>
>>>>> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>>>>>
>>>>> Cheers
>>>>>
>>>>> On 9/25/14, James Montalvo <[hidden email]> wrote:
>>>>>> I'm not sure if this is related, but on my wiki I'm occasionally
>>>>>> getting
>>>>>> "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and
>>> SMW
>>>>> 2.0
>>>>>> from 1.22/1.8.0.5.
>>>>>>
>>>>>> What I mean by "stuck" is that the jobs don't get executed when I do
>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>> and
>>>>> over.
>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>> are
>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>> all
>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>
>>>>>> Is there some way to debug runJobs.php so I can provide better info?
>>>>>>
>>>>>> --James
>>>>>> On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]> wrote:
>>>>>>
>>>>>>> I certainly hope so too - or that there's some other standard way
>>>>>>> to
>>>>>>> get
>>>>>>> previously-attempted jobs to be run again. I only know that I tried
>>>>>>> that
>>>>>>> SQL trick once, and it worked. Perhaps this is another reason why
>>>>>>> the
>>>>>>> question should have instead been sent to the mediawiki-l mailing
>>>>>>> list.
>>>>>>> :)
>>>>>>>
>>>>>>> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>>>>> [hidden email]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>> if
>>>>>>>>> you
>>>>>>>> just
>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>> job_attempts =
>>>>>>>>> 0", they will get run again.
>>>>>>>>
>>>>>>>> In case this solves the issue, I sincerely hope there is a
>>> different
>>>>>>>> way (a more standard way) to reset the "job_attempts" field other
>>>>>>>> than
>>>>>>>> by using a SQL statement to manipulate the job table.
>>>>>>>>
>>>>>>>> Cheers
>>>>>>>>
>>>>>>>> On 9/25/14, Yaron Koren <[hidden email]> wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I believe the issue is the "job_attempts" field in the "job"
>>>>>>>>> table.
>>>>> I
>>>>>>>>> believe each job is only attempted a certain number of times
>>>>>>>>> before
>>>>>>>>> MediaWiki basically just gives up and ignores it. My guess is
>>> that
>>>>>>>>> that
>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>> if
>>>>>>>>> you
>>>>>>>> just
>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>> job_attempts =
>>>>>>>>> 0", they will get run again.
>>>>>>>>>
>>>>>>>>> -Yaron
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> __________________
>>>> http://mixcloud.com/darenwelsh
>>>> http://www.beatportfolio.com
>>>>
>>>
>>
>>
>>
>> --
>> __________________
>> http://mixcloud.com/darenwelsh
>> http://www.beatportfolio.com
>>
>


[Attachment: smw-job-cycle-bug-1.txt (19K)]

Re: [SMW-devel] MediaWiki Job queue problem

Markus Krötzsch-2
P.S. I forgot to mention that editing the redirect page again fixed some
data in SMW (no more property values stored), but it still did not set
the redirect marker in smw_object_ids and did not store the redirect in
smw_fpt_redi. I did not test whether this update might have stopped the
cycle, but I doubt it (the earlier presence of spurious properties with
illegal property values should not have affected the jobs). -- Markus
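
To check a single suspect page by hand, queries along these lines can be
used; the exact column names depend on the SMW 2.0 SQLStore layout, and
'Some_redirect_page' is just a placeholder title:

    -- is the page flagged as a redirect object at all?
    SELECT smw_id, smw_title, smw_namespace, smw_iw
    FROM smw_object_ids
    WHERE smw_title = 'Some_redirect_page' AND smw_namespace = 0;

    -- is the redirect target recorded?
    SELECT *
    FROM smw_fpt_redi
    WHERE s_title = 'Some_redirect_page' AND s_namespace = 0;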

On 13.10.2014 11:29, Markus Krötzsch wrote:

> Hi again,
>
> I have isolated a single specimen of an infinite job cycle on my wiki.
> The details are attached (I hope the attachment makes it to the list as
> well). In short, the loop is triggered by an update job on a redirect
> page. For some reason, the update job creates another instance of a new
> update job of the same page.
>
> The SMW tables do not contain correct information about the redirect
> page: it has data stored about itself and is not marked as a redirect. I
> do not know if this is the cause of the problem or a side effect.
> However, since the page was stored with a #REDIRECT on it, this regular
> storing of the data should already have created correct data -- this
> should not depend on any update job.
>
> The file also contains a sample of a job queue that I had at first,
> where one of the indestructible jobs has two more copies of itself. They
> were never modified during my tests, but this might explain why a job
> queue can get longer and longer in such a case (new job instances,
> whereever they come from, are protected by their indestructible copies).
>
> The wrong SMW data might be the deeper issue here, but in any case it
> should be able to make the UpdateJob execution robust against this kind
> of cycle to address the main problem. Maybe the UpdateJob would (if
> successful) actually fix the data, though it can hardly be its cause.
>
> Regards,
>
> Markus
>
>
> On 25.09.2014 00:02, James HK wrote:
>> Hi,
>>
>>> I have executed runJobs several times and the job_attempts remains at
>>> 1 for
>>> those five jobs. We were thinking of doing a database backup today, then
>>
>> I'm curious about the "job_attempts" field as I would have expected to
>> see an increment for when the job (actually there has been an attempt
>> to execute and not only display a line on command shell) and to see
>> whether the job actually gets execute when running `runJobs`, just add
>> a simple `var_dump( 'hello world' )` line to [0] and verify a
>> `SMW\UpdateJob` activity.
>>
>> [0]
>> https://github.com/SemanticMediaWiki/SemanticMediaWiki/blob/master/includes/src/MediaWiki/Jobs/UpdateJob.php#L118
>>
>>
>> Cheers
>>
>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>> I have executed runJobs several times and the job_attempts remains at
>>> 1 for
>>> those five jobs. We were thinking of doing a database backup today, then
>>> delete those five jobs from the table, then run the SMW "repair and
>>> upgrade" via the admin special page.
>>>
>>> Even if this clears the job queue, we'd like to understand what
>>> caused this
>>> in the first place. I realize that's a very open-ended question :)
>>>
>>> Daren
>>>
>>>
>>> On Wed, Sep 24, 2014 at 4:30 PM, James HK <[hidden email]>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>>> job_attempts.
>>>>>
>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>> template page.
>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>>>> are
>>>> for
>>>>> "standard" pages. These pages do not seem to be related based on
>>>>> category
>>>>> or template.
>>>>
>>>> Just to make sure that I interpret the meaning of "stuck" correctly,
>>>> after finishing `runJobs` those four jobs (five with the
>>>> `refreshLinks` jobs) are still visible in the job table with an
>>>> "job_attempts" of 1. When running `runJobs` again the same four
>>>> `SMW\UpdateJob` (same as in the same title and same Id) jobs are
>>>> executed and increment the "job_attempts" to 2?
>>>>
>>>> If you empty the job table and execute `runJobs` does the same five
>>>> jobs appear again after the run with "job_attempts" = 1?
>>>>
>>>> Cheers
>>>>
>>>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>>> job_attempts.
>>>>>
>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>> template page.
>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>>>> are
>>>> for
>>>>> "standard" pages. These pages do not seem to be related based on
>>>>> category
>>>>> or template.
>>>>>
>>>>> On Wed, Sep 24, 2014 at 3:37 PM, James HK
>>>>> <[hidden email]>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>>> are
>>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>>> all
>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>
>>>>>> I don't think the problem is with the `SMW\UpdateJob` because it does
>>>>>> a simple "shallow update" of the store while the management of job
>>>>>> status (including how many attempts, id's etc.) are done by the MW
>>>>>> JobQueue (which has first change in 1.22 and then again in 1.23).
>>>>>>
>>>>>> It does beg the question whether all `SMW\UpdateJob`'s are "stuck" or
>>>>>> only certain jobs belonging to a group of pages or single page?
>>>>>>
>>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>>> and
>>>>>> over.
>>>>>>
>>>>>> How do you know that the same job is run over and over again because
>>>>>> based and above discussion ("job_attempts") a job with too many
>>>>>> attempts is retired after some time.
>>>>>>
>>>>>> If the same job is run over and over again, what is displayed for the
>>>>>> "job_attempts" counter?
>>>>>>
>>>>>> [0] went into SMW 2.0 to counteract any possible job duplicates for
>>>>>> the same `root title`.
>>>>>>
>>>>>> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> On 9/25/14, James Montalvo <[hidden email]> wrote:
>>>>>>> I'm not sure if this is related, but on my wiki I'm occasionally
>>>>>>> getting
>>>>>>> "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and
>>>> SMW
>>>>>> 2.0
>>>>>>> from 1.22/1.8.0.5.
>>>>>>>
>>>>>>> What I mean by "stuck" is that the jobs don't get executed when I do
>>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>>> and
>>>>>> over.
>>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>>> are
>>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>>> all
>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>>
>>>>>>> Is there some way to debug runJobs.php so I can provide better info?
>>>>>>>
>>>>>>> --James
>>>>>>> On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]> wrote:
>>>>>>>
>>>>>>>> I certainly hope so too - or that there's some other standard way
>>>>>>>> to
>>>>>>>> get
>>>>>>>> previously-attempted jobs to be run again. I only know that I tried
>>>>>>>> that
>>>>>>>> SQL trick once, and it worked. Perhaps this is another reason why
>>>>>>>> the
>>>>>>>> question should have instead been sent to the mediawiki-l mailing
>>>>>>>> list.
>>>>>>>> :)
>>>>>>>>
>>>>>>>> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>>>>>> [hidden email]>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>>> if
>>>>>>>>>> you
>>>>>>>>> just
>>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>>> job_attempts =
>>>>>>>>>> 0", they will get run again.
>>>>>>>>>
>>>>>>>>> In case this solves the issue, I sincerely hope there is a
>>>> different
>>>>>>>>> way (a more standard way) to reset the "job_attempts" field other
>>>>>>>>> than
>>>>>>>>> by using a SQL statement to manipulate the job table.
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>>
>>>>>>>>> On 9/25/14, Yaron Koren <[hidden email]> wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I believe the issue is the "job_attempts" field in the "job"
>>>>>>>>>> table.
>>>>>> I
>>>>>>>>>> believe each job is only attempted a certain number of times
>>>>>>>>>> before
>>>>>>>>>> MediaWiki basically just gives up and ignores it. My guess is
>>>> that
>>>>>>>>>> that
>>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>>> if
>>>>>>>>>> you
>>>>>>>>> just
>>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>>> job_attempts =
>>>>>>>>>> 0", they will get run again.
>>>>>>>>>>
>>>>>>>>>> -Yaron
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> __________________
>>>>> http://mixcloud.com/darenwelsh
>>>>> http://www.beatportfolio.com
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> __________________
>>> http://mixcloud.com/darenwelsh
>>> http://www.beatportfolio.com
>>>

Re: [SMW-devel] MediaWiki Job queue problem

James Montalvo
Testing I performed on our production wiki is in line with what you've
found, Markus. Two redirect pages were the major culprits:

1) The "L. Shore" page
  * Generated up to 87 jobs each time it ran
  * Has two revisions: the first with page content, the second when the page
was edited to redirect to another page

2) The "SSU/PMM Prep EVA" page
  * Generated up to 16 jobs each time it ran
  * Has one revision, from when the page (which had a long history) was
moved to another title
  * Please note that there has been a lot of indecision about this
particular page's naming. It has been both a content page and a redirect
page, with at least four different names.

I have data similar to the file Markus attached, but I'd rather not attach
it publicly. If anyone needs it, please ask me. Also, it's in HTML format and
I bet the mailing list would block that as an attachment...
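
For anyone who wants to reproduce per-page counts like the ones above, the
queue can be tallied straight from the database; a minimal sketch against the
default MediaWiki job table (MySQL syntax, no table prefix assumed):

  -- count queued jobs per command and target page to spot the worst offenders
  SELECT job_cmd, job_namespace, job_title,
         COUNT(*) AS queued, MAX(job_attempts) AS max_attempts
  FROM job
  GROUP BY job_cmd, job_namespace, job_title
  ORDER BY queued DESC
  LIMIT 20;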

--James

On Mon, Oct 13, 2014 at 4:35 AM, Markus Krötzsch <[hidden email]> wrote:

> P.S. I forgot to mention that editing the redirect page again fixed some
> data in SMW (no more property values stored), but it still did not set the
> redirect marker in smw_object_ids and did not store the redirect in
> smw_fpt_redi. I did not test whether this update might have stopped the cycle,
> but I doubt it (the earlier presence of spurious properties with illegal
> property values should not have affected the jobs). -- Markus
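
A quick way to check what Markus describes is to look at the two SMW tables
directly; the sketch below assumes the SMW 2.x SQLStore layout, where
'Some_Redirect_Page' is a placeholder title and the column names and the
':smw-redi' interwiki marker are assumptions that may differ between versions:

  -- a redirect subject should carry a special smw_iw marker (e.g. ':smw-redi')
  SELECT smw_id, smw_title, smw_namespace, smw_iw
  FROM smw_object_ids
  WHERE smw_title = 'Some_Redirect_Page' AND smw_namespace = 0;

  -- the redirect target should be recorded in the fixed redirect property table
  SELECT s_title, s_namespace, o_id
  FROM smw_fpt_redi
  WHERE s_title = 'Some_Redirect_Page' AND s_namespace = 0;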
>
>
> On 13.10.2014 11:29, Markus Krötzsch wrote:
>
>> Hi again,
>>
>> I have isolated a single specimen of an infinite job cycle on my wiki.
>> The details are attached (I hope the attachment makes it to the list as
>> well). In short, the loop is triggered by an update job on a redirect
>> page. For some reason, the update job creates a new instance of an update
>> job for the same page.
>>
>> The SMW tables do not contain correct information about the redirect
>> page: it has data stored about itself and is not marked as a redirect. I
>> do not know if this is the cause of the problem or a side effect.
>> However, since the page was stored with a #REDIRECT on it, this regular
>> storing of the data should already have created correct data -- this
>> should not depend on any update job.
>>
>> The file also contains a sample of a job queue that I had at first,
>> where one of the indestructible jobs has two more copies of itself. They
>> were never modified during my tests, but this might explain why a job
>> queue can get longer and longer in such a case (new job instances,
>> wherever they come from, are protected by their indestructible copies).
>>
>> The wrong SMW data might be the deeper issue here, but in any case it
>> should be possible to make the UpdateJob execution robust against this kind
>> of cycle to address the main problem. Maybe the UpdateJob would (if
>> successful) actually fix the data, though it can hardly be its cause.
>>
>> Regards,
>>
>> Markus
>>
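
For reference, the "copies of itself" behaviour can be watched directly in the
job table between runs; a sketch against the default MW 1.22+ schema
('Some_Redirect_Page' is again a placeholder, no table prefix assumed):

  -- list every queued copy for one page; re-run after runJobs.php and compare job_id values
  SELECT job_id, job_cmd, job_attempts, job_token, job_timestamp
  FROM job
  WHERE job_namespace = 0 AND job_title = 'Some_Redirect_Page'
  ORDER BY job_id;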
>>
>> On 25.09.2014 00:02, James HK wrote:
>>
>>> Hi,
>>>
>>>> I have executed runJobs several times and the job_attempts remains at 1 for
>>>> those five jobs. We were thinking of doing a database backup today, then
>>>
>>> I'm curious about the "job_attempts" field, as I would have expected it to
>>> increment whenever there has actually been an attempt to execute the job
>>> (and not only when a line is displayed on the command shell). To see whether
>>> the job actually gets executed when running `runJobs`, just add a simple
>>> `var_dump( 'hello world' )` line to [0] and verify the `SMW\UpdateJob`
>>> activity.
>>>
>>> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/blob/master/includes/src/MediaWiki/Jobs/UpdateJob.php#L118
>>>
>>>
>>> Cheers
>>>
>>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>>
>>>> I have executed runJobs several times and the job_attempts remains at 1 for
>>>> those five jobs. We were thinking of doing a database backup today, then
>>>> deleting those five jobs from the table, then running the SMW "repair and
>>>> upgrade" via the admin special page.
>>>>
>>>> Even if this clears the job queue, we'd like to understand what
>>>> caused this
>>>> in the first place. I realize that's a very open-ended question :)
>>>>
>>>> Daren
>>>>
>>>>
>>>> On Wed, Sep 24, 2014 at 4:30 PM, James HK <[hidden email]> wrote:
>>>>
>>>>  Hi,
>>>>>
>>>>>  We currently have five jobs that are "stuck". All of them have 1 for
>>>>>> job_attempts.
>>>>>>
>>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>>> template page.
>>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>>>>>> for "standard" pages. These pages do not seem to be related based on
>>>>>> category or template.
>>>>>>
>>>>>
>>>>> Just to make sure that I interpret the meaning of "stuck" correctly:
>>>>> after finishing `runJobs` those four jobs (five with the `refreshLinks`
>>>>> job) are still visible in the job table with a "job_attempts" of 1, and
>>>>> when running `runJobs` again the same four `SMW\UpdateJob` jobs (same
>>>>> title, same id) are executed and increment the "job_attempts" to 2?
>>>>>
>>>>> If you empty the job table and execute `runJobs`, do the same five
>>>>> jobs appear again after the run with "job_attempts" = 1?
>>>>>
>>>>> Cheers
>>>>>
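
One way to answer that kind of question is to snapshot the remaining rows
before and after a runJobs.php pass and compare; a minimal sketch against the
default job table (no table prefix assumed):

  -- snapshot the queue; compare job_id, job_attempts and job_token between runs
  SELECT job_id, job_cmd, job_namespace, job_title, job_attempts, job_token
  FROM job
  ORDER BY job_cmd, job_namespace, job_title;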
>>>>> On 9/25/14, Daren Welsh <[hidden email]> wrote:
>>>>>
>>>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>>>> job_attempts.
>>>>>>
>>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>>> template page.
>>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and are
>>>>>> for "standard" pages. These pages do not seem to be related based on
>>>>>> category or template.
>>>>>>
>>>>>> On Wed, Sep 24, 2014 at 3:37 PM, James HK
>>>>>> <[hidden email]>
>>>>>> wrote:
>>>>>>
>>>>>>  Hi,
>>>>>>>
>>>>>>>> runJobs.php will literally run forever. After the non-offending jobs are
>>>>>>>> cleared it's easy to see which are the offenders. Thus far I think all
>>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>>>
>>>>>>>
>>>>>>> I don't think the problem is with the `SMW\UpdateJob` because it does
>>>>>>> a simple "shallow update" of the store, while the management of job
>>>>>>> status (including how many attempts, ids etc.) is done by the MW
>>>>>>> JobQueue (which first changed in 1.22 and then again in 1.23).
>>>>>>>
>>>>>>> This does raise the question of whether all `SMW\UpdateJob`s are "stuck",
>>>>>>> or only certain jobs belonging to a group of pages or a single page?
>>>>>>>
>>>>>>>> runJobs.php, but for some reason they keep attempting to run over and
>>>>>>>> over.
>>>>>>>
>>>>>>> How do you know that the same job is run over and over again? Based on
>>>>>>> the above discussion ("job_attempts"), a job with too many attempts is
>>>>>>> retired after some time.
>>>>>>>
>>>>>>> If the same job is run over and over again, what is displayed for the
>>>>>>> "job_attempts" counter?
>>>>>>>
>>>>>>> [0] went into SMW 2.0 to counteract any possible job duplicates for
>>>>>>> the same `root title`.
>>>>>>>
>>>>>>> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>> On 9/25/14, James Montalvo <[hidden email]> wrote:
>>>>>>>
>>>>>>>> I'm not sure if this is related, but on my wiki I'm occasionally getting
>>>>>>>> "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and SMW 2.0
>>>>>>>> from 1.22/1.8.0.5.
>>>>>>>>
>>>>>>>> What I mean by "stuck" is that the jobs don't get executed when I do
>>>>>>>> runJobs.php, but for some reason they keep attempting to run over and over.
>>>>>>>> runJobs.php will literally run forever. After the non-offending jobs are
>>>>>>>> cleared it's easy to see which are the offenders. Thus far I think all
>>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>>>
>>>>>>>> Is there some way to debug runJobs.php so I can provide better info?
>>>>>>>>
>>>>>>>> --James
>>>>>>>> On Sep 24, 2014 10:55 AM, "Yaron Koren" <[hidden email]>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I certainly hope so too - or that there's some other standard way to get
>>>>>>>>> previously-attempted jobs to be run again. I only know that I tried that
>>>>>>>>> SQL trick once, and it worked. Perhaps this is another reason why the
>>>>>>>>> question should have instead been sent to the mediawiki-l mailing list. :)
>>>>>>>>>
>>>>>>>>> On Wed, Sep 24, 2014 at 11:35 AM, James HK <[hidden email]> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>>> column is greater than 0 for all the rows in the table; I think if you
>>>>>>>>>>> just go into the database and call something like "UPDATE job SET
>>>>>>>>>>> job_attempts = 0", they will get run again.
>>>>>>>>>>
>>>>>>>>>> In case this solves the issue, I sincerely hope there is a different
>>>>>>>>>> way (a more standard way) to reset the "job_attempts" field other than
>>>>>>>>>> by using a SQL statement to manipulate the job table.
>>>>>>>>>>
>>>>>>>>>> Cheers
>>>>>>>>>>
>>>>>>>>>> On 9/25/14, Yaron Koren <[hidden email]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I believe the issue is the "job_attempts" field in the "job" table. I
>>>>>>>>>>> believe each job is only attempted a certain number of times before
>>>>>>>>>>> MediaWiki basically just gives up and ignores it. My guess is that that
>>>>>>>>>>> column is greater than 0 for all the rows in the table; I think if you
>>>>>>>>>>> just go into the database and call something like "UPDATE job SET
>>>>>>>>>>> job_attempts = 0", they will get run again.
>>>>>>>>>>>
>>>>>>>>>>> -Yaron
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>>>>>>>>>
>>>>>> --
>>>>>> __________________
>>>>>> http://mixcloud.com/darenwelsh
>>>>>> http://www.beatportfolio.com
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> __________________
>>>> http://mixcloud.com/darenwelsh
>>>> http://www.beatportfolio.com
>>>>
>>>>
>>
>