Fwd: [Asis-l] new Journal: International Journal of Internet Research Ethics


Fwd: [Asis-l] new Journal: International Journal of Internet Research Ethics

phoebe ayers
New journal, perhaps of interest.

---------- Forwarded message ----------
From: Jeremy Hunsinger <[hidden email]>
Date: Mar 15, 2007 7:08 AM
Subject: [Asis-l] new Journal: International Journal of Internet Research Ethics
To: [hidden email]


Distribute as appropriate:
International Journal of Internet Research Ethics

http://www.uwm.edu/Dept/SOIS/cipr/ijire.html

Description and Scope:
The IJIRE is the first peer-reviewed online journal dedicated
specifically to cross-disciplinary, cross-cultural research on
Internet Research Ethics. All disciplinary perspectives, from the
arts and humanities to the social, behavioral, and biomedical
sciences, are reflected in the journal.

With the emergence of Internet use as a research locale and tool
throughout the 1990s, researchers from disparate disciplines, ranging
from the social sciences to humanities to the sciences, have found a
new fertile ground for research opportunities that differ greatly
from their traditional biomedical counterparts. As a result,
"populations," locales, and spaces with no corresponding physical
environment became focal points, or sites, of research activity. Human
subjects protection questions then began to arise across
disciplines and over time: What about privacy? How is informed
consent obtained? What about research on minors? What are "harms" in
an online environment? Is this really human subjects work? More
broadly, are the ethical obligations of researchers conducting
research online somehow different from other forms of research ethics
practices?

As Internet Research Ethics has developed as its own field and
discipline, additional questions have emerged: How do diverse
methodological approaches result in distinctive ethical conflicts –
and, possibly, distinctive ethical resolutions? How do diverse
cultural and legal traditions shape what are perceived as ethical
conflicts and permissible resolutions? How do researchers
collaborating across diverse ethical and legal domains recognize and
resolve ethical issues in ways that acknowledge and incorporate often
markedly different ethical understandings?

Finally, as "the Internet" continues to transform and diffuse, new
research ethics questions arise – e.g., in the areas of blogging,
social network spaces, etc. Such questions are at the heart of IRE
scholarship, and such general areas as anonymity, privacy, ownership,
authorial ethics, legal issues, research ethics principles (justice,
beneficence, respect for persons), and consent are appropriate areas
for consideration.

The IJIRE will publish articles of both a theoretical and a practical
nature for scholars from all disciplines who are pursuing, or reviewing,
IRE work. Case studies of online research, theoretical analyses, and
practitioner-oriented scholarship that promote understanding of IRE
at ethics and institutional review boards, for instance, are
encouraged. Methodological differences are embraced.

_______________________________________________
Wiki-research-l mailing list
[hidden email]
http://lists.wikimedia.org/mailman/listinfo/wiki-research-l

Database dump and script questions

Piotr Konieczny-2
Dear all,

I have a few questions about database dumps (I checked
http://meta.wikimedia.org/wiki/Data_dumps, and it does not answer
them). Perhaps you know the answers :)

First:
* is it possible to download a dump of only one page, with its history?
* is it possible to download a dump of only one user's (or selected
users') contributions?
* if not, is it possible to run some scripts/statistical analysis
without downloading the dump (100+ GB after decompression, according to
the estimates, 99.9% of which I don't need for my study...)

Second:
* I am rather bad at writing scripts (at programming, basically), and I
would like to do something similar to what Anthony et al. have done
('Explaining Quality in Internet Collective Goods: Zealots and Good
Samaritans in the Case of Wikipedia'), just limited to one article and
its contributors. An excerpt of what they did:

"For each contributor, we use the Wikipedia differencing algorithm3 to
compare the differences between three documents: (1) edit, the edit
submitted by the contributor, (2) previous, the version of the article
prior to the edit, and (3) current, the current version of the article
as it exists on the day the sample was drawn (...) We measure the
quality of an edit by calculating the number of characters from a
contributor’s edit that are retained in the current version, measured as
the percentage retained of the total number of characters in the entry
(retained in current/total in current)."

What I would like to do: run a script on a single article's history and
on the contributions of its users to get 'retention values' for those
users' edits on that article only AND on all of their contributions in
general.
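
For illustration, a minimal Python sketch of that retention measure might
look like the following. It uses difflib's SequenceMatcher in place of the
Wikipedia differencing algorithm the paper cites, and a crude "does the
added text still appear verbatim in the current revision?" test, so it is
a starting point rather than a faithful reimplementation:

    from difflib import SequenceMatcher

    def contributed_segments(previous, edit):
        """Text the contributor added: present in `edit` but not in `previous`."""
        ops = SequenceMatcher(None, previous, edit, autojunk=False).get_opcodes()
        return [edit[j1:j2] for tag, i1, i2, j1, j2 in ops
                if tag in ("insert", "replace")]

    def retention(previous, edit, current):
        """Rough Anthony et al. measure: characters the contributor added
        that still appear verbatim in the current revision, divided by the
        total number of characters in the current revision."""
        if not current:
            return 0.0
        retained = sum(len(seg) for seg in contributed_segments(previous, edit)
                       if seg and seg in current)
        return retained / len(current)

A full script would still have to walk the article's revision history in
order, pair each revision with its predecessor, and aggregate the results
per username.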

If anybody knows of a script I could adapt for this purpose (or a place
to ask), I would be most grateful for the information - writing one is
unfortunately beyond my capabilities.

Thank you for your time,

--
Piotr Konieczny

"The problem about Wikipedia is, that it just works in reality, not in
theory."

_______________________________________________
Wiki-research-l mailing list
[hidden email]
http://lists.wikimedia.org/mailman/listinfo/wiki-research-l

Re: Database dump and script questions

Garrett-8
You can export selected pages from Special:Export on the wiki in question simply by entering all the page names to dump (each on its own line), including their talk pages if you want those too, and unchecking the "current revision only" checkbox.

The problem is that there will be a point where the export hits a fatal error (and is thus basically useless), so the stability of your internet connection and the length of the page(s) being exported will determine how much you can grab in one go before you hit an error. There is a way to extract the revisions in chunks, which I think is explained on the page you linked to, but that is similarly troublesome. An additional problem is that from time to time the "current revision only" checkbox is disabled for performance reasons, meaning the only way to get full page histories is from the monthly dumps themselves.
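
If you would rather script it than use the form, a rough Python sketch of
fetching one page's full history through Special:Export might look like
the following. It assumes the third-party "requests" library and the
pages/history/action parameters that the export form submits, and it runs
into exactly the same size and reliability limits described above:

    import requests  # third-party HTTP library; assumed to be installed

    EXPORT_URL = "http://en.wikipedia.org/wiki/Special:Export"

    def export_page_history(title):
        """Ask Special:Export for every revision of one page; return the XML.

        Large histories may be truncated or fail outright, so check that
        the returned XML is well formed before relying on it.
        """
        response = requests.post(
            EXPORT_URL,
            data={"pages": title, "history": "1", "action": "submit"},
            timeout=300,
        )
        response.raise_for_status()
        return response.text

    # Example (hypothetical page title):
    # xml = export_page_history("Wikipedia")
    # open("page-history.xml", "w", encoding="utf-8").write(xml)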

It's not possible to dump only one user's contributions; typically, most of their edits will be built upon the contributions of others, and removing those others from the edit history would be a breach of the GNU Free Documentation License stipulations, so this is the likely reason for the lack of such a feature (although a script could likely work around this).
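
As a sketch of such a workaround: once you have an export XML file for the
relevant page(s), filtering its revisions down to a single username could
look roughly like this (the export schema's namespace version differs
between MediaWiki releases, so copy it from the root element of your file):

    import xml.etree.ElementTree as ET

    # Namespace of the export schema; the version number differs between
    # MediaWiki releases, so copy it from the root element of your XML file.
    NS = "{http://www.mediawiki.org/xml/export-0.3/}"

    def revisions_by_user(export_xml_path, username):
        """Yield (page title, timestamp, revision text) for edits by `username`."""
        root = ET.parse(export_xml_path).getroot()
        for page in root.iter(NS + "page"):
            title = page.findtext(NS + "title")
            for rev in page.iter(NS + "revision"):
                if rev.findtext(NS + "contributor/" + NS + "username") == username:
                    yield (title,
                           rev.findtext(NS + "timestamp"),
                           rev.findtext(NS + "text") or "")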

As for the scripts, that's not something I can help you with. Anyway, I hope I've answered at least some of your questions. :)
Garrett

_______________________________________________
Wiki-research-l mailing list
[hidden email]
http://lists.wikimedia.org/mailman/listinfo/wiki-research-l