[RFC/Summit] `npm install mediawiki-express`

[RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
Architecturally it may be desirable to factor our codebase into multiple
independent services with clear APIs, but small wikis would clearly like a
"single server" installation with all of the services running under one
roof, as it were. Some options previously proposed have involved VM
containers that bundle PHP, Node, MediaWiki and all required services into
a preconfigured full system image. (T87774
<https://phabricator.wikimedia.org/T87774>)

This summit topic/RFC proposes an alternative: tightly integrating PHP/HHVM
with a persistent server process running under node.js. The central service
bundles together multiple independent services, written in either PHP or
JavaScript, and coordinates their configurations. Running a
wiki-with-services can be done on a shared node.js host like Heroku.

This is not intended as a production configuration for large wikis -- in
those cases having separate server farms for PHP, PHP services, and
JavaScript services is best: that independence is indeed the reason why
refactoring into services is desirable. But integrating the services into a
single process allows for hassle-free configuration and maintenance of
small wikis.

A proof-of-concept has been built. The node package php-embed
<https://www.npmjs.com/package/php-embed> embeds PHP 5.6.14 into a node.js
(>= 2.4.0) process, with bidirectional property and method access between
PHP and node. The package mediawiki-express
<https://www.npmjs.com/package/mediawiki-express> uses this to embed
MediaWiki into an express.js <http://expressjs.com/> HTTP server. (Other
HTTP server frameworks could equally well be used.)  A hook in
`LocalSettings.php` allows you to configure the MediaWiki instance in
JavaScript.

A bit of further hacking would allow you to fully configure the MediaWiki
instance (in either PHP or JavaScript) and to dispatch to Parsoid (running
in the same process).

*SUMMIT GOALS / FOR DISCUSSION*


   - Determine whether this technology (or something similar) might be an
   acceptable alternative for small sites which are currently using shared
   hosting.  See T113210 <https://phabricator.wikimedia.org/T113210> for
   related discussion.
   - Identify and address technical roadblocks to deploying modular
   single-server wikis (see below).
   - Discuss methods for deploying complex wikitext extensions.  For
   example, the WikiHiero
   <https://www.mediawiki.org/wiki/Extension:WikiHiero> extension would
   ideally be distributed with (a) PHP code hooking mediawiki core, (b)
   client-side JavaScript extending Visual Editor, and (c) server-side
   JavaScript extending Parsoid.  Can these be distributed as a single
   integrated bundle?
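
One hypothetical shape for such a bundle, purely as a strawman: an npm-style
manifest that points at the three pieces.  Every field name and path below is
invented for illustration; none of this is real WikiHiero packaging.

```json
{
  "name": "mediawiki-ext-wikihiero",
  "version": "0.1.0",
  "description": "Strawman manifest; the 'mediawiki' fields are hypothetical",
  "mediawiki": {
    "php-hooks": "php/WikiHiero.php",
    "client-modules": "modules/ve-wikihiero.js",
    "parsoid-extension": "parsoid/wikihiero.js"
  }
}
```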


*TECHNICAL CHALLENGES*


   - Certain pieces of our code are hardwired to specific directories
   underneath the mediawiki-core code.  This complicates efforts to run
   MediaWiki from a "clean tree", and to distribute pieces of MediaWiki
   separately.  In particular:
      - It would be better if the `vendor` directory could (optionally)
      live outside the core mediawiki tree, so it could be distributed
      separately from the main codebase and allow for alternative package
      structures.
      - Extensions and skins would benefit from allowing a "path-like"
      list of directories, rather than a single location underneath the
      core mediawiki tree.  Extensions/skins could be distributed as
      separate packages, with a simple hook to add their locations to the
      search path.
   - Tim Starling has suggested that when running in single-server mode,
   some internal APIs (for example, between MediaWiki and Parsoid) would be
   better exposed as unix sockets on the filesystem, rather than as
   internet domain sockets bound to localhost.  For one, this would be more
   "secure by default" and avoid inadvertent exposure of internal service
   APIs.
   - It would be best to define a standardized mechanism for "services" to
   declare themselves and be connected and configured.  This may mean
   standard routes on a single-server install (`/w` and `/wiki` for core,
   `/parsoid` for Parsoid, `/thumb` for the thumbnailer service, etc.),
   standard ports for each service (with their own HTTP servers), or else
   standard locations for unix sockets.
   - Can we leverage some of the various efforts to bridge composer and npm
   (for example <https://github.com/eloquent/composer-npm-bridge>), so we
   don't end up with incompatible packaging?

Phabricator ticket: https://phabricator.wikimedia.org/T114457

Download the code for mediawiki-express and play with it a bit and let's
discuss!
 --scott

--
(http://cscott.net)
_______________________________________________
Wikitech-l mailing list
[hidden email]
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [RFC/Summit] `npm install mediawiki-express`

Brion Vibber-4
That is just *all kinds* of awesome.

-- brion


Re: [RFC/Summit] `npm install mediawiki-express`

Ryan Lane-2
In reply to this post by C. Scott Ananian
Is this simply to support hosted providers? npm is one of the worst package
managers around. This really seems like a case where thin docker images and
docker-compose shine. It's easy to handle from the packager's side, it's
incredibly simple from the user's side, and it doesn't require reinventing
the world to distribute things.

If this is the kind of stuff we're doing to support hosted providers, it
seems it's really time to stop supporting them. It's $5/month for a proper
VM on DigitalOcean, and there are even cheaper solutions around. Hosted
providers at this point aren't cheaper; at best they're slightly easier to
use, but MediaWiki is seriously handicapping itself to support this
use-case.


Re: [RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
I view it as partly an effort to counteract the perceived complexity of
running a forest full of separate services.  It's fine to say they're all
preinstalled in this VM image, but that's still a lot of complexity to dig
through: where are all the servers? What ports are they listening on?
Did one of them crash?  How do I restart it?

For some users, the VM (or an actual server farm) is indeed the right
solution.  But this was an attempt to see if I could recapture the
"everything's here in this one process (and one code tree)" simplicity for
those for whom that's good enough.
  --scott

Re: [RFC/Summit] `npm install mediawiki-express`

Yongmin Hong (revi)
In reply to this post by Ryan Lane-2

Please remember, not everyone is technically enlightened enough to use VMs.
--
revi
https://revi.me
-- Sent from Android --

Re: [RFC/Summit] `npm install mediawiki-express`

Tyler Romeo
Using VMs does not take any "technical enlightenment". There are containerization tools that are relatively simple to use in small-scale environments, and learning them would take no more time than learning how to use npm.

(Note, I'm not arguing against this solution, which I think is pretty cool. Just wanted to speak out against the concept that npm is somehow far easier to use than any other solution.)

-- 
Tyler Romeo
https://parent5446.nyc
0x405D34A7C86B42DF

Reply | Threaded
Open this post in threaded view
|

Re: [RFC/Summit] `npm install mediawiki-express`

Ryan Lane-2
In reply to this post by C. Scott Ananian
On Thu, Nov 5, 2015 at 5:38 PM, C. Scott Ananian <[hidden email]>
wrote:

> I view it as partly an effort to counteract the perceived complexity of
> running a forest full of separate services.  It's fine to say they're all
> preinstalled in this VM image, but that's still a lot of complexity to dig
> through: where are all the servers? What ports are they listening on?
> Did one of them crash?  How do I restart it?
>
>
When you run docker-compose, your containers are linked together. If you
have the following containers:

parsoid
mathoid
mediawiki
mysql (hopefully not)
cassandra
redis

You'd talk to redis from mediawiki via: redis://redis:6379 and you'd talk
to parsoid via: http://parsoid and to mathoid via http://mathoid, etc etc.
It handles the networking for you.
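For illustration, a minimal `docker-compose.yml` along the lines described above might look like this (the image names and port mapping are assumptions, not a tested configuration):

```yaml
version: "2"
services:
  mediawiki:
    image: mediawiki        # assumed image name
    ports:
      - "8080:80"
    depends_on:
      - parsoid
      - redis
  parsoid:
    image: parsoid          # assumed image name
  redis:
    image: redis
```

Within the compose network the services resolve each other by name, so MediaWiki would reach redis at `redis://redis:6379` and Parsoid at `http://parsoid`, as described above.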

If one of them crashes, docker-compose will tell you. If any of them fail
to start, it will also tell you.

I'm not even a huge proponent of docker, but the docker-compose solution
for this is far simpler and far more standard than what you're proposing,
and it doesn't require investing a ton of effort into something that no
other project will ever consider using.

Ride the ocean on the big boat, not the life raft.


> For some users, the VM (or an actual server farm) is indeed the right
> solution.  But this was an attempt to see if I could recapture the
> "everything's here in this one process (and one code tree)" simplicity for
> those for whom that's good enough.
>

There's no server farm here. If you're running Linux, it's just a set of
processes running in containers on a single node (which could be your
laptop). If you're on OS X or Windows it's a VM, but that can be totally
abstracted away using Vagrant.

If you're launching in the cloud, you could launch directly to Joyent or
AWS ECS, or very easily stand something up on Digital Ocean. If you're
really feeling like making things easier for end-users, provide
orchestration code that will automatically provision MW and its
dependencies via docker-compose in a VM in one of these services.

Orchestration + containers is what most people are doing for microservices.
Don't make something that's complex to maintain and completely out of the
ordinary out of fear of complexity. Go with the solutions everyone else is
using and wrap tooling around them to make it easier for people.

- Ryan

Re: [RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
In reply to this post by Tyler Romeo
Let's not let this discussion sidetrack into "shared hosting vs VMs (vs
docker?)" --- there's another phabricator ticket and summit topic for that (
https://phabricator.wikimedia.org/T87774 and
https://phabricator.wikimedia.org/T113210).

I'd prefer to have discussion in *this* particular task/thread concentrate
on:

* Hey, we can have JavaScript and PHP in the same packaging system.  What
cool things might that enable?

* Hey, we can have JavaScript and PHP running together in the same server.
Perhaps some persistence-related issues with PHP can be made easier?

* Hey, we can actually write *extensions for mediawiki-core* in JavaScript
(or CoffeeScript, or...) now.  Or run PHP code inside Parsoid.  How could
we use that?  (Could it grow developer communities?)

* How are parser extensions (like, say, WikiHiero, but there are lots of
them) going to be managed in the long term?  There are three separate
codebases to hook right now.  An extension like <gallery> might eventually
need to hook the image thumbnail service, too.  Do we have a plan?

And the pro/anti-npm and pro/anti-docker and pro/anti-VM discussion can go
into one of those other tasks.  Thanks.

 --scott

Re: [RFC/Summit] `npm install mediawiki-express`

Greg Grossmeier-2
<quote name="C. Scott Ananian" date="2015-11-06" time="14:13:58 -0500">
> And the pro/anti-npm and pro/anti-docker and pro/anti-VM discussion can go
> into one of those other tasks.  Thanks.

Premature segmentation into a walled-off subset of problems has inherent
issues just as bad as premature incorporation into larger problems; it's a
trade-off. Do we want to discuss the issue at a high level? Yes. Do we want
to discuss a low-level/specific implementation? Also yes. But let others
discuss the high level if they want (while others discuss the low level if
they want). Separate threads, maybe :)

Greg

--
| Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg                A18D 1138 8E47 FAC8 1C7D |


Re: [RFC/Summit] `npm install mediawiki-express`

Tyler Romeo
In reply to this post by C. Scott Ananian
I would very, *very* much prefer to not have MediaWiki core extensions written in JavaScript. Even beyond my criticisms of JavaScript as a language, I feel like that just unnecessarily introduces complexity. The purpose of this wrapper is to combine separate micro-services that would otherwise be run in separate VMs / servers / etc. so that it can easily be run in a hosting setup.

Otherwise, I'm interested in what implications this will have, especially for making MediaWiki easier to install and use, which would be awesome.

-- 
Tyler Romeo
https://parent5446.nyc
0x405D34A7C86B42DF


Re: [RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
Tyler: I hear you.  I'm not sure it's a good idea, either -- especially not
for core extensions used in production.

But it does perhaps allow some expansion of our developer community on the
fringes, and makes writing extensions possible for a larger set of people?
And perhaps there are some cool things written in JavaScript which the
extended community could more easily hook up to MediaWiki using `php-embed`.

I'm not sure that there are.  I'm just opening up the discussion to see if
anyone pipes up with, "oh, yeah, I've always wanted to do XYZ!".

Greg: I agree re: premature stifling of discussion.  I'm just saying that
"high-level" conversation is already happening elsewhere, and it's more
productive there.  I started *this* particular thread trying to elicit
discussion more narrowly focused on the thing I've just built.
  --scott



--
(http://cscott.net)

Re: [RFC/Summit] `npm install mediawiki-express`

Tyler Romeo
That's a pretty good point. Despite my comments, I'll definitely keep an open mind, and am interested in what people might propose.

-- 
Tyler Romeo
https://parent5446.nyc
0x405D34A7C86B42DF


Re: [RFC/Summit] `npm install mediawiki-express`

Marcin Cieslak-3
In reply to this post by Ryan Lane-2
On 2015-11-05, Ryan Lane <[hidden email]> wrote:
> Is this simply to support hosted providers? npm is one of the worst package
> managers around. This really seems like a case where thin docker images and
> docker-compose really shines. It's easy to handle from the packer side,
> it's incredibly simple from the user side, and it doesn't require
> reinventing the world to distribute things.

I got heavily involved in the node world recently, and I fully share your
opinion about npm; npm@3 takes the disaster to the next level.

Are we using some native npm modules in our stack? *That* is hard
to support.

> If this is the kind of stuff we're doing to support hosted providers, it
> seems it's really time to stop supporting hosted providers. It's $5/month
> to have a proper VM on digital ocean. There's even cheaper solutions
> around. Hosted providers at this point aren't cheaper. At best they're
> slightly easier to use, but MediaWiki is seriously handicapping itself to
> support this use-case.

I feel very strongly there is a need for a quick setup for people who
have their LAMP stack already working and feel familiar with that environment.
The problem is that a full-stack MediaWiki is no longer a LAMP application.
Those people aren't going away any time soon, and they aren't about to join
the coolest game in town.

I have already written scripts to keep code, vendor and core skins in sync
from git. I am beginning to write even more scripts to quickly deploy/destroy MW
instances. (My platform does not do Docker, btw.)

Maybe the right strategic move will be to implement MediaWiki phase four in
server-side JavaScript. Then the npm way is probably the only way forward.

Saper



Re: [RFC/Summit] `npm install mediawiki-express`

Brion Vibber-4
In reply to this post by C. Scott Ananian
On Fri, Nov 6, 2015 at 11:13 AM, C. Scott Ananian <[hidden email]>
wrote:
>
> * Hey, we can have JavaScript and PHP in the same packaging system.  What
> cool things might that enable?
>
> * Hey, we can have JavaScript and PHP running together in the same server.
> Perhaps some persistence-related issues with PHP can be made easier?
>

We probably wouldn't want to break the PHP execution context concept that
requests are self-contained (and failing out is reasonably safe). But you
could for instance route sessions or cache data through the containing node
server instead of on the filesystem or a separate memcache/etc service...


> * Hey, we can actually write *extensions for mediawiki-core* in JavaScript
> (or CoffeeScript, or...) now.  Or run PHP code inside Parsoid.  How could
> we use that?  (Could it grow developer communities?)
>

I'm uncertain about the desirability of general direct JS<->PHP sync call
bridging, in that relying on it would _require_ this particular node+PHP
distribution. I'd prefer loose enough coupling that the JS engine can be
local or remote, and the PHP engine can be either Zend or HHVM, etc.

Of course there are interesting possibilities like using JS as a template
module extension language in place of / addition to Lua. A general warning:
as I understand the php-embed bridge, JS-side code would a) have full
rights to the system as the user the daemon runs as, and b)
exiting/failing out of node would kill the entire daemon.

PHP-inside-Parsoid might be interesting for some kinds of extensions, but
I'm not sure whether it's better to rig that up versus using looser
coupling where we make an internal HTTP call over to the PHP MediaWiki side.


> * How are parser extensions (like, say, WikiHiero, but there are lots of
> them) going to be managed in the long term?  There are three separate
> codebases to hook right now.  An extension like <gallery> might eventually
> need to hook the image thumbnail service, too.  Do we have a plan?
>

This probably deserves its own thread!

Ideally you should only have to write one implementation, and it should be
self-contained or access the container via a limited API.

I'm not really sure I grasp how Parsoid handles tag hook extensions at the
moment, actually... can anyone fill in some details?


Note that conceptually we have a few different types of parser tag hook
extension:

* the standalone renderer (<math>, <hiero>, etc) -- these still need
storage for output caching, or CSS/JS/image assets that need serving. These
are easy to 'call out' to an external service for, which would make it easy
for parsoid to call MediaWiki, or for both to call a common separate
rendering implementation.

* the standalone wikitext wrapper/modifier (<nowiki>, <pre>, <poem>,
<syntaxhighlight>) -- ideally these can be implemented mostly in terms of
wikitext transforms :) but may need assets, again, such as highlighting
CSS. Again mostly standalone, easy to transform in one place and return the
data to be included in output.

* the standalone renderer that needs access to the wiki's content and
rendering (<gallery>) -- could be implemented as a Lua module I bet! ;)
These require back-and-forth with the rest of the MediaWiki system... but
could easily be done on the MediaWiki side and output copied into the
parsoid HTML.

* the state-carrying wikitext wrapper/modifier (<ref>+<references>) --
these require strict ordering, and build up state over the course of
parsing, *and* render things into wikitext, and .... well it's just ugly.

* weird stuff (labeled section transclusion? translate?) -- not even sure
how some of these work ;)


-- brion

Re: [RFC/Summit] `npm install mediawiki-express`

Ryan Lane-2
In reply to this post by C. Scott Ananian
On Fri, Nov 6, 2015 at 11:13 AM, C. Scott Ananian <[hidden email]>
wrote:

> Let's not let this discussion sidetrack into "shared hosting vs VMs (vs
> docker?)" --- there's another phabricator ticket and summit topic for that
> (
> https://phabricator.wikimedia.org/T87774 and
> https://phabricator.wikimedia.org/T113210).
>
>
I only mentioned this portion of the discussion because I can't think of
any other reason your initial proposal makes sense, since it's essentially
discussing ways to distribute and run a set of microservices. Using docker
requires root, which isn't available on shared hosting. I'm fine ignoring
this topic in this discussion, though.


> I'd prefer to have discussion in *this* particular task/thread concentrate
> on:
>
> * Hey, we can have JavaScript and PHP in the same packaging system.  What
> cool things might that enable?
>
>
> * Hey, we can have JavaScript and PHP running together in the same server.
> Perhaps some persistence-related issues with PHP can be made easier?
>
> * Hey, we can actually write *extensions for mediawiki-core* in JavaScript
> (or CoffeeScript, or...) now.  Or run PHP code inside Parsoid.  How could
> we use that?  (Could it grow developer communities?)
>
>
You're not talking about microservices here, so it's at least partially a
different discussion. You're talking about adding multiple languages into a
monolith and that's a path towards insanity. It's way easier to understand
and maintain large numbers of microservices than a polyglot monolith. REST
with well-defined APIs between services provides all of the same benefits
while also letting people manage their service independently, even with the
possibility of the service not being tied to MediaWiki or Wikimedia at all.

I'd posit that adding additional languages into the monolith will more
likely have the result of shrinking the developer community because it
requires knowledge of at least two languages to properly do development.

> * How are parser extensions (like, say, WikiHiero, but there are lots of
> them) going to be managed in the long term?  There are three separate
> codebases to hook right now.  An extension like <gallery> might eventually
> need to hook the image thumbnail service, too.  Do we have a plan?
>
>
This seems like a perfect place for another microservice.


> And the pro/anti-npm and pro/anti-docker and pro/anti-VM discussion can go
> into one of those other tasks.  Thanks.
>
>
You're discussing packaging, distribution and running of services. So, I
don't believe they belong in another task. You're saying that alternatives
to your idea are only relevant when considered on their own, but these
alternatives are basically industry standards for the problem set at this
point and your proposal is something that only MediaWiki (and Wikimedia)
will be doing or maintaining.

- Ryan

Re: [RFC/Summit] `npm install mediawiki-express`

Daniel Friesen-2
In reply to this post by Brion Vibber-4
On 2015-11-06 1:12 PM, Brion Vibber wrote:
> Of course there are interesting possibilities like using JS as a template
> module extension language in place of / addition to Lua. A general warning:
> as I understand the php-embed bridge, JS-side code would a) have full
> rights to the system as the user the daemon runs as, and b)
> exiting/failing out of node would kill the entire daemon.
node has a built in vm <https://nodejs.org/api/vm.html> module that is
regularly used to execute sandboxed js that doesn't have access to the
privileged node api. This code doesn't have access to `process.exit()`
and PHP's concept of fatal errors (in addition to thrown exceptions)
that immediately halt the process and can't be caught doesn't exist in
JS. Sandboxing against infinite loops could also be done by running the
sandbox in another process (child_process even has a high-level message
passing stream for communicating with a node js child process).

That all being said, I still think the original rationale for picking
Lua (more sandboxing controls, including execution limits based on steps
in Lua rather than varying execution time) is still valid.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


Re: [RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
In reply to this post by Brion Vibber-4
On Fri, Nov 6, 2015 at 4:12 PM, Brion Vibber <[hidden email]> wrote:

> On Fri, Nov 6, 2015 at 11:13 AM, C. Scott Ananian <[hidden email]>
> wrote:
> > * Hey, we can have JavaScript and PHP running together in the same
> server.
> > Perhaps some persistence-related issues with PHP can be made easier?
> >
>
> We probably wouldn't want to break the PHP execution context concept that
> requests are self-contained (and failing out is reasonably safe). But you
> could for instance route sessions or cache data through the containing node
> server instead of on the filesystem or a separate memcache/etc service...
>

Right, exactly.  I'm currently running Opcache and APCu inside the embedded
PHP which are going to some lengths to offer persistent caches.  I'm not an
expert on PHP architecture; I suspect there are other places in mediawiki
where we are similarly jumping through hoops.  Perhaps these could be
simplified, at least for certain users.


> > * Hey, we can actually write *extensions for mediawiki-core* in
> JavaScript
> > (or CoffeeScript, or...) now.  Or run PHP code inside Parsoid.  How could
> > we use that?  (Could it grow developer communities?)
> >
>
> I'm uncertain about the desirability of general direct JS<->PHP sync call
> bridging, in that relying on it would _require_ this particular node+PHP
> distribution. I'd prefer loose enough coupling that the JS engine can be
> local or remote, and the PHP engine can be either Zend or HHVM, etc.
>

I expect that I can port php-embed to PHP 7 and/or HHVM without too much
trouble, if interest warrants.  And I already support quite a large number
of different node versions, from 2.4.0 to 5.0.0.  And there are some
interesting other alternative implementations that could export the same
interface but use RPC to bridge node and PHP, see for instance
https://github.com/bergie/dnode-php.  Even the sync/async distinction can
be bridged; if you look at the underlying implementation for php-embed all
communication is done via async message passing between the threads.  We
just "stop and wait" for certain replies to emulate sync calls (in
particular for PHP, which prefers it that way).


> Of course there are interesting possibilities like using JS as a template
> module extension language in place of / addition to Lua. A general warning:
> as I understand the php-embed bridge, JS-side code would a) have full
> rights to the system as the user the daemon runs as, and b)
> exiting/failing out of node would kill the entire daemon.
>

There is sandboxing within v8, so your warning is not accurate.

And in fact, the "mirror image" project is the PHP extension v8js, which I
believe Tim started and I worked on for a while before attempting
node-php-embed.  It also uses the native v8 sandboxing facilities.


> PHP-inside-Parsoid might be interesting for some kinds of extensions, but
> I'm not sure whether it's better to rig that up versus using looser
> coupling where we make an internal HTTP call over to the PHP MediaWiki
> side.
>

Yup.  That's essentially what we already do: after we started to implement
the template engine in Parsoid, that work was scrapped, and the entire
templating engine is now implemented by calling over to PHP to expand
templates.  And whenever we want more information about the expansion, we
implement it in PHP.

But that's essentially the genesis of the "mediawiki as a collection of
services" idea -- once you start doing this, you find all sorts of bits of
crufty complex PHP code which you'd rather not try to reimplement.  First
templates, then image thumbnailing, next who knows, probably the skin.  One
day they might all be spun out as separate services with internal HTTP
calls between them.

I'm just providing a PoC that lets you ask questions about potential
alternatives.  I welcome the discussion.

> * How are parser extensions (like, say, WikiHiero, but there are lots of
> > them) going to be managed in the long term?  There are three separate
> > codebases to hook right now.  An extension like <gallery> might
> eventually
> > need to hook the image thumbnail service, too.  Do we have a plan?
> >
>
> This probably deserves its own thread!
>

Yeah.

Ideally you should only have to write one implementation, and it should be
> self-contained or access the container via a limited API.
>
> I'm not really sure I grasp how Parsoid handles tag hook extensions at the
> moment, actually... can anyone fill in some details?
>

It doesn't, basically.  It just asks PHP to do the expansion for it, and
then wraps the whole thing in an element warning VE not to touch it.

Except for citations.

On our roadmap for this quarter we have a task to write a "proper"
extension interface, and then use it to refactor the citation code and
(hopefully) implement <gallery> support.  The end goal being to empower the
community to write Parsoid extensions for all the large number of *other*
tag extensions we don't yet support.

Note that Visual Editor needs to be extended at the same time as Parsoid is
extended, so they continue to recognize the same DOM structures, and to
make the resulting structures "easily editable" in some fashion.

The exact contours of all this are still a research project.  This work is,
again, a PoC to see if we could allow folks to package their Parsoid
extension as an NPM module, and perhaps include in the same module the
necessary VE code and hook it up.
  --scott

--
(http://cscott.net)

Re: [RFC/Summit] `npm install mediawiki-express`

C. Scott Ananian
In reply to this post by Daniel Friesen-2
On Fri, Nov 6, 2015 at 4:52 PM, Daniel Friesen <[hidden email]>
wrote:

> That all being said. I still think the original rationale for picking
> lua (more sandboxing controls including execution limits based on steps
> in lua rather than varying execution time) is still valid.
>

It's not, actually.  It may have been at the time.  But v8 now has both
time and memory limits and fine-grained counters for various events with
callbacks and all sorts of crazy sandbox-y things.  See
https://github.com/phpv8/v8js/blob/master/v8js_timer.cc (although I think
the latest v8 actually has even more direct ways of enforcing these limits.)
 --scott, who'd like to get some more work done on Scribunto/JS at some
point.

--
(http://cscott.net)