[Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

[Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
With the NSA revelations over the past months, there has been some very
questionable information starting to circulate suggesting that trying to
implement perfect forward secrecy for https web traffic isn't worth the
effort. I am not sure of the provenance of these reports, and I would like
to see a much more thorough debate on their accuracy or lack thereof. Here
is an example:

http://tonyarcieri.com/imperfect-forward-secrecy-the-coming-cryptocalypse

As my IETF RFC coauthor Harald Alvestrand told me: "The stuff about 'have
to transmit the session key in the clear' is completely bogus, of course.
That's what Diffie-Hellman is all about."
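Harald's point can be sketched in a few lines: in Diffie-Hellman, each side transmits only a public value, and the shared session key is derived independently on both ends, so it never crosses the wire. A toy sketch (illustration only; these parameters are far too small to be secure, and real TLS uses large primes or elliptic-curve groups):

```python
import secrets

# Toy finite-field Diffie-Hellman (NOT secure at this size).
p = 0xFFFFFFFB  # public prime modulus
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent (never sent)
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent (never sent)

A = pow(g, a, p)   # Alice transmits only this public value
B = pow(g, b, p)   # Bob transmits only this public value

# Each side combines its own secret with the other's public value;
# the session key itself is never transmitted.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```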

Ryan Lane tweeted yesterday: "It's possible to determine what you've been
viewing even with PFS. And no, padding won't help." And he wrote on today's
Foundation blog post, "Enabling perfect forward secrecy is only useful if
we also eliminate the threat of traffic analysis of HTTPS, which can be
used to detect a user’s browsing activity, even when using HTTPS," citing
http://blog.ioactive.com/2012/02/ssl-traffic-analysis-on-google-maps.html

It is not at all clear to me that that discussion pertains to PFS or
Wikimedia traffic in any way.

I strongly suggest that the Foundation contract with well-known independent
reputable cryptography experts to resolve these questions. Tracking and
correcting misinformed advice, perhaps in cooperation with the EFF, is just
as important.
_______________________________________________
Wikimedia-l mailing list
[hidden email]
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, <mailto:[hidden email]?subject=unsubscribe>

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Ryan Lane-3
On Thu, Aug 1, 2013 at 1:33 PM, James Salsman <[hidden email]> wrote:

> With the NSA revelations over the past months, there has been some very
> questionable information starting to circulate suggesting that trying to
> implement perfect forward secrecy for https web traffic isn't worth the
> effort. I am not sure of the provenance of these reports, and I would like
> to see a much more thorough debate on their accuracy or lack thereof. Here
> is an example:
>
> http://tonyarcieri.com/imperfect-forward-secrecy-the-coming-cryptocalypse
>
> As my IETF RFC coauthor Harald Alvestrand told me: "The stuff about 'have
> to transmit the session key in the clear' is completely bogus, of course.
> That's what Diffie-Hellman is all about."
>
> Ryan Lane tweeted yesterday: "It's possible to determine what you've been
> viewing even with PFS. And no, padding won't help." And he wrote on today's
> Foundation blog post, "Enabling perfect forward secrecy is only useful if
> we also eliminate the threat of traffic analysis of HTTPS, which can be
> used to detect a user’s browsing activity, even when using HTTPS," citing
> http://blog.ioactive.com/2012/02/ssl-traffic-analysis-on-google-maps.html
>
> It is not at all clear to me that that discussion pertains to PFS or
> Wikimedia traffic in any way.
>
> I strongly suggest that the Foundation contract with well-known independent
> reputable cryptography experts to resolve these questions. Tracking and
> correcting misinformed advice, perhaps in cooperation with the EFF, is just
> as important.
>

Well, my post was reviewed by quite a number of tech staff and no one
rebutted my claim.

Assuming traffic analysis can be used to determine your browsing habits as
they are occurring (which is likely not terribly hard for Wikipedia), then
there's no point in forward secrecy, because there's no need to decrypt
the traffic in the first place. It would protect passwords, but people
should be changing their passwords occasionally anyway, right?

Using traffic analysis it's also likely possible to correlate edits with
users, based on the timings of requests and the public data available
for revisions.

I'm not saying that PFS is worthless, but I am saying that implementing PFS
without first solving the issue of timing and traffic analysis
vulnerabilities is a waste of our servers' resources.

- Ryan

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
Ryan Lane wrote:
>...
> Assuming traffic analysis can be used to determine your browsing
> habits as they are occurring (which is likely not terribly hard for Wikipedia)

The Google Maps example you linked to works by building a huge
database of the exact byte sizes of satellite image tiles. Are you
suggesting that we could fingerprint articles by their sizes and/or
the sizes of the images they load?

But if so, in your tweet you said padding wouldn't help. Yet padding
would completely obliterate that size information, wouldn't it?


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Ryan Lane-3
On Thursday, August 1, 2013, James Salsman wrote:

> Ryan Lane wrote:
> >...
> > Assuming traffic analysis can be used to determine your browsing
> > habits as they are occurring (which is likely not terribly hard for
> Wikipedia)
>
> The Google Maps example you linked to works by building a huge
> database of the exact byte sizes of satellite image tiles. Are you
> suggesting that we could fingerprint articles by their sizes and/or
> the sizes of the images they load?
>

Of course. They can easily crawl us, and we provide everything for
download. Unlike sites such as Facebook or Google, our content is delivered
exactly the same to nearly every user.

>

> But if so, in your tweet you said padding wouldn't help. Yet padding
> would completely obliterate that size information, wouldn't it?
>
>
Only Opera has pipelining enabled, so resource requests are serial. Also,
our resources are delivered from a number of urls (upload, bits, text)
making it easier to identify resources. Even with padding you can take the
relative size of resources being delivered, and the order of those sizes
and get a pretty good idea of the article being viewed. If there's enough
data you may be able to identify multiple articles and see if the
subsequent article is a link from the previous article, making guesses more
accurate. It only takes a single accurate guess for an edit to identify an
editor and see their entire edit history.
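The attack Ryan describes can be illustrated with a toy matcher: the eavesdropper sees only the sizes and order of encrypted responses, and compares them against a precomputed database. The article names and byte sizes below are invented for illustration; a real attacker would build the database by crawling the site:

```python
# Toy size-sequence fingerprinting. An observer of encrypted traffic sees
# the ordered sizes of responses, not their contents.
fingerprints = {
    "Article_A": [48123, 1204, 33871, 9001],
    "Article_B": [48123, 1204, 34002, 650],
    "Article_C": [17455, 1204, 33871, 9001],
}

def identify(observed, db, tolerance=64):
    """Return pages whose resource-size sequence matches within tolerance."""
    return [
        page for page, sizes in db.items()
        if len(sizes) == len(observed)
        and all(abs(s - o) <= tolerance for s, o in zip(sizes, observed))
    ]

# Even with a 64-byte tolerance, the ordered sequence singles out one page:
print(identify([48123, 1204, 33871, 9001], fingerprints))  # → ['Article_A']
```

Note that the nonzero tolerance already absorbs small amounts of per-resource padding, which is the crux of the disagreement in this thread.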

Proper support of pipelining in browsers or multiplexing in protocols like
SPDY would help this situation. There are probably a number of things we can
do to improve the situation without pipelining or newer protocols, and
we'll likely put some effort into this front. I think this takes priority
over PFS as PFS isn't helpful if decryption isn't necessary to track
browsing habits.

Of course the highest priority is simply to enable HTTPS by default, as it
forces the use of traffic analysis or decryption, which is likely a high
enough bar to hinder tracking efforts for a while.

- Ryan

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

George William Herbert



On Aug 1, 2013, at 10:07 PM, Ryan Lane <[hidden email]> wrote:

> Also,
> our resources are delivered from a number of urls (upload, bits, text)
> making it easier to identify resources. Even with padding you can take the
> relative size of resources being delivered, and the order of those sizes
> and get a pretty good idea of the article being viewed. If there's enough
> data you may be able to identify multiple articles and see if the
> subsequent article is a link from the previous article, making guesses more
> accurate. It only takes a single accurate guess for an edit to identify an
> editor and see their entire edit history.
>
> Proper support of pipelining in browsers or multiplexing in protocols like
> SPDY would help this situation. There's probably a number of things we can
> do to improve the situation without pipelining or newer protocols, and
> we'll likely put some effort into this front. I think this takes priority
> over PFS as PFS isn't helpful if decryption isn't necessary to track
> browsing habits.


This needs some proper crypto expert vetting, but...

It would be trivial (in both effort and impact on customer bandwidth) to pad everything to a 1k boundary on HTTPS transmission once we get there.  A variable-length non-significant header field can be used.  Forcing such size counts into very large bins will degrade fingerprinting significantly.

It would also not be much more effort or customer impact to pad to the next larger 1k size for a random large fraction of transmissions.  One could imagine a user setting where one could opt in or out of that, for example, and perhaps a set of relative inflation scheme sizes one could choose from (10% inflated, 25% inflated, 50%, 50% plus 10% get 1-5 more k of padding, ...).
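A minimal sketch of the scheme George describes, assuming the padding bytes can be carried in a non-significant header or trailer (the function and parameter names are invented for illustration):

```python
import os

def pad_to_bin(payload: bytes, bin_size: int = 1024, extra_bins: int = 0) -> bytes:
    """Pad payload with random bytes up to the next bin_size boundary,
    optionally inflating by whole extra bins to blur sizes further."""
    target = -(-len(payload) // bin_size) * bin_size + extra_bins * bin_size
    return payload + os.urandom(target - len(payload))

page = b"x" * 30356            # e.g. a median-sized article body
padded = pad_to_bin(page)
assert len(padded) % 1024 == 0  # observers see only the 1 KiB bin (30720 here)
```

An observer then learns only which size bin a response fell into, not its exact length.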

Even the slightest of these options (under HTTPS everywhere) starts to give plausible deniability to someone's browsing; the greater ones would make fingerprinting quite painful, though running a statistical exercise over such options seems useful to understand just how hard they would actually make it...

The question is, what is the point of this?  Provide very strong user obfuscation?  Provide at least minimal individual evidentiary obfuscation from the level of what a US court (for example) might consider scientifically reliable, to block use of that history in trials (even if educated guesses still might be made by law enforcement as to the articles)?

Countermeasures are responses to attain specific goals.  What are the goals people care about for such a program, and what is the Foundation willing to consider worth supporting with bandwidth $$ or programmer time?  How do we come up with a list of possible goals and prioritize amongst them, in both a technical and a policy sense?

I believe that PFS will come out higher here, as its cost is really only CPU crunchies and already-existent software settings to choose from, and its benefits to long-term total obscurability are significant if done right.

No quantity of countermeasures beats inside info, and out-of-band compromise of our main keys ends up being attractive enough as the only logical attack once we start down this road at all past HTTPS-everywhere.  One-time key compromise is far more likely than realtime compromise of PFS keys as they rotate, though even that is possible given sufficiently motivated, successful, stealthy subversion.  The credible ability to be confident, in the end, that that's not happening is arguably the long-term ceiling for how high we can realistically go with countermeasures, and its primary limits are operational security and intrusion detection rather than in-band behavior.

At some point the ops team would need a security team, an IDS team, and a counterintelligence team to watch the other teams, and I don't know if the Foundation cares that much or would find operating that way to be a more comfortable moral and practical stance...


George William Herbert
Sent from my iPhone



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
George William Herbert wrote:
>...
> It would also not be much more effort or customer impact
> to pad to the next larger 1k size for a random large fraction
> of transmissions.

Padding each transmission with a random number of bytes, up to say 50
or 100, might provide a greater defense against fingerprinting while
saving massive amounts of bandwidth.

>... At some point the ops team would need a security team,
> an IDS team, and a counterintelligence team to watch the
> other teams, and I don't know if the Foundation cares that
> much or would find operating that way to be a more
> comfortable moral and practical stance...

I'm absolutely sure that they do care enough to get it right, but I
think that approach might be overkill. Just one or two cryptology
experts to make the transition to HTTPS, PFS, and whatever padding is
prudent would really help. I also hope that, if there is an effort to
spread disinformation about the value of such techniques, the
Foundation might consider joining with e.g. the EFF to help fight it.
I think a single cryptology consultant would likely be able to make
great progress in both. Getting cryptography right isn't so much a
time-intensive task as it is one sensitive to experience and
training.

Setting up and monitoring with ongoing auditing can often be
automated, but does require the continued attention of at least one
highly skilled expert, and preferably more than one in case the first
one gets hit by a bus.


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Anthony-73
On Fri, Aug 2, 2013 at 1:32 PM, James Salsman <[hidden email]> wrote:

> George William Herbert wrote:
> >...
> > It would also not be much more effort or customer impact
> > to pad to the next larger 1k size for a random large fraction
> > of transmissions.
>
> Padding each transmission with a random number of bytes, up to say 50
> or 100, might provide a greater defense against fingerprinting while
> saving massive amounts of bandwidth.
>

Or it might provide virtually no defense and not save any bandwidth.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Marc-Andre
In reply to this post by James Salsman-2
On 08/02/2013 01:32 PM, James Salsman wrote:
> Padding each transmission with a random number of bytes, up to say 50
> or 100, might provide a greater defense against fingerprinting while
> saving massive amounts of bandwidth.

It would slightly change the algorithm used to make the fingerprint, not
make it significantly harder to compute, and you'd want some fuzz in
the match process anyway, since you wouldn't necessarily want to have to
fiddle with your database at every edit.

The combination of "at least this size" with "at least that many
secondary documents of at least those sizes in that order" is probably
sufficient to narrow the match to a very tiny minority of articles.
You'd also need to randomize delays, shuffle load order, load blinds,
etc.  A minor random increase in document size wouldn't even slow
down the process.

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Anthony-73
How much padding is already inherent in HTTPS?  Does the protocol pad to
the size of the blocks in the block cipher?

Seems to me that any amount of padding is going to give little bang for the
buck, at least without using some sort of pipelining.  You could probably
do quite a bit if you redesigned Mediawiki from scratch using all those
newfangled asynchronous javascript techniques, but that's not exactly an
easy task.  :)


On Fri, Aug 2, 2013 at 3:45 PM, Marc A. Pelletier <[hidden email]> wrote:

> On 08/02/2013 01:32 PM, James Salsman wrote:
> > Padding each transmission with a random number of bytes, up to say 50
> > or 100, might provide a greater defense against fingerprinting while
> > saving massive amounts of bandwidth.
>
> It would slightly change the algorithm used to make the fingerprint, not
> make it significantly harder to compute, and you'd want some fuzz in
> the match process anyway, since you wouldn't necessarily want to have to
> fiddle with your database at every edit.
>
> The combination of "at least this size" with "at least that many
> secondary documents of at least those sizes in that order" is probably
> sufficient to narrow the match to a very tiny minority of articles.
> You'd also need to randomize delays, shuffle load order, load blinds,
> etc.  A minor random increase in document size wouldn't even slow
> down the process.
>
> -- Marc
>
>

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
Marc A. Pelletier wrote:
>...
> A minor random increase in document size wouldn't even slow
> down [fingerprinting.]

That's absolutely false. The last time I measured the sizes of all
9,625 vital articles, there was only one at the median length of
30,356 bytes but four articles up to 50 bytes larger. Scale that up to
4,300,000 articles, and are you suggesting anyone is seriously going
to try fingerprinting secondary characteristics for buckets of 560
articles? It would not only slow them down, it would make their false
positive rate useless.
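James's bucket argument can be checked with a quick simulation. The size distribution below is invented for illustration (a real test would use actual article sizes from a dump):

```python
import random

random.seed(0)
# Simulated corpus: 430,000 invented article sizes, standing in for a
# crawl of the real wiki (distribution chosen arbitrarily).
sizes = [random.randint(1_000, 200_000) for _ in range(430_000)]

def candidate_count(observed, sizes, max_pad=50):
    """How many articles could yield `observed` bytes after 0..max_pad padding?"""
    return sum(1 for s in sizes if observed - max_pad <= s <= observed)

# With random padding, an observed size maps back to a whole bucket of
# plausible articles rather than (usually) a unique one.
print(candidate_count(30_356, sizes))
```

With these invented numbers, each observed size is consistent with on the order of a hundred articles, which is the false-positive effect James is pointing at.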

This is why we need cryptography experts instead of laypeople making
probabilistic inferences on Boolean predicates.

Marc, I note that you have recommended not keeping the Perl CPAN
modules up to date on Wikimedia Labs:
http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
saying that out of date packages are the "best tested" when in fact
almost all CPAN packages have their own unit tests. That sort of
reasoning is certain to allow known security vulnerabilities to
persist when they could easily be avoided.

Anthony wrote:
>
> How much padding is already inherent in HTTPS?

None, which is why Ryan's Google Maps fingerprinting example works.

>... Seems to me that any amount of padding is going to give little
> bang for the buck....

Again, can we please procure expert opinions instead of relying on the
existing pool of volunteer and staff opinions, especially when there
is so much FUD circulating that discourages the kinds of encryption
which would most likely strengthen privacy?


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Matthew Flaschen-2
On 08/02/2013 05:06 PM, James Salsman wrote:
> Marc, I note that you have recommending not keeping the Perl CPAN
> modules up to date on Wikimedia Labs:
> http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
> saying that out of date packages are the "best tested" when in fact
> almost all CPAN packages have their own unit tests. That sort of
> reasoning is certain to allow known security vulnerabilities to
> persist when they could easily be avoided.

Besides being from a few months ago and unrelated to this conversation,
I think that's a mischaracterization of what he said.

He said in general he would lean towards "keeping the distribution's
versions since those are the better tested ones", but noted it should be
looked at on a "package-by-package basis", and that "there may well be
good reasons to bump up to a more recent version" (a security
vulnerability that the distro isn't fixing rapidly enough would be such
a reason).

It seems from the context "better tested" meant something like "people
are using this in practice in real environments", not only automated
testing.

Matt Flaschen


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Marc-Andre
On 08/02/2013 05:50 PM, Matthew Flaschen wrote:
> It seems from the context "better tested" meant something like "people
> are using this in practice in real environments", not only automated
> testing.

And, indeed, given the constraints and objectives of the Tool Labs
(i.e.: no secrecy, all open source and data, high reliability), the more
important concern is "tested to be robust"; I'd deviate from
distribution packaging in a case where a security issue could lead to
escalation, but data leaks are not a concern.

And whilst I am not a cryptography expert (depending, I suppose, on how
you define "expert"), I happen to be very well versed in security protocol
design and zero-information analysis (but lack the mathematical acumen for
cryptography proper, so I have to trust the Blums and Shamirs of this
world at their word).

For what concerns us here in traffic analysis, TLS is almost entirely
worthless *on its own*.  It is a necessary step, and has a great number
of /other/ benefits that justify its deployment without having anything
to do with the NSA's snooping.  I was not making an argument against it.

What I /am/ saying, OTOH, is that random padding without (at least)
pipelining and placards *is* worthless to protect against traffic
analysis since any reliable method to do it would be necessarily robust
against deviation in size.  Given that it has a cost to implement and
maintain, and consumes resources, it would be counterproductive to do
that.  It would give false reassurance of higher security without
actually bringing any security benefit.  I.e.: theatre.

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
>... random padding without (at least) pipelining and
> placards *is* worthless to protect against traffic analysis

No, that is not true, and
http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
explains why. Padding makes it difficult but not impossible to distinguish
between two HTTPS destinations. 4,300,000 destinations is right out.

> since any reliable method to do it would be necessarily robust
> against deviation in size....

That's like saying any reliable method to solve satisfiability in
polynomial time would be necessarily robust against variations in the
number of terms per expression. It's not even wrong.

When is the Foundation going to obtain the expertise to protect readers
living under regimes which completely forbid HTTPS access to Wikipedia,
like China? I suppose I'd better put that bug about steganography for the
surveillance triggers from TOM-Skype in Bugzilla. I wish that could have
happened before everyone goes to Hong Kong.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Marc-Andre
On 08/02/2013 08:15 PM, James Salsman wrote:
> No, that is not true, and
> http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
> explains why. Padding makes it difficult but not impossible to distinguish
> between two HTTPS destinations. 4,300,000 destinations is right out.

... have you actually /read/ that paper? Not only does it discuss how
naive countermeasures like the ones you suggest fail to protect against
identification even at that coarse level, it also presumes much *less*
available data for making a determination than what is readily available
from visiting /one/ article (let alone the extra information you can
extract from one or two consecutive articles because of the correlation
provided by the links).

Traffic analysis is a hard attack to protect against, and just throwing
random guesses at what makes it harder is not useful (and yes, padding
is just a random guess that is /well known/ in the literature to not
help against TA, despite its benefits against certain kinds of
known-plaintext attacks and in feedback ciphers).

I recommend you read ''Secure Transaction Protocol Analysis: Models and
Applications'', by Chen et al. (ISBN 9783540850731).  It's already a
little out of date and a bit superficial, but will give you a good basic
working knowledge of the problem set and some viable approaches to the
subject.

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
Marc A. Pelletier wrote:
>...
>> http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
>...
> have you actually /read/ that paper?

Of course I have. Have you read the conclusions at the bottom right of page
344? What kind of an adversary trying to infer our readers' article
selections is going to be able to use accuracy 10% better than a coin flip?
The National Pointless Trial Attorney's Employment Security Agency?

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Anthony-73
In reply to this post by James Salsman-2
> Anthony wrote:
> >
> > How much padding is already inherent in HTTPS?
>
> None, which is why Ryan's Google Maps fingerprinting example works.
>

Citation needed.


> >... Seems to me that any amount of padding is going to give little
> > bang for the buck....
>
> Again, can we please procure expert opinions instead of relying on the
> existing pool of volunteer and staff opinions, especially when there
> is so much FUD prevalent discouraging the kinds of encryption which
> would most likely strengthen privacy?


Feel free.  But don't talk about what is most likely if you're not
interested in being told that you're wrong.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Anthony-73
On Fri, Aug 2, 2013 at 10:07 PM, Anthony <[hidden email]> wrote:

>
> Anthony wrote:
>> >
>> > How much padding is already inherent in HTTPS?
>>
>> None, which is why Ryan's Google Maps fingerprinting example works.
>>
>
> Citation needed.
>

Also please address
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

It seems that the ciphers which run in CBC mode, at least, are padded.
Wikipedia currently seems to be set to use RC4-128. I'm not sure what, if
any, padding is used by that cipher. But presumably Wikipedia will switch
to a better cipher if Wikimedia cares about security.
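For what it's worth, CBC-mode padding only rounds the final block up to the cipher's block size (16 bytes for AES), which is far too little to hide page sizes. A minimal PKCS#7 sketch of that padding:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """PKCS#7: append N bytes, each of value N, so the length becomes a
    multiple of the block size (a full extra block if already aligned)."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

# A 5-byte message grows only to one 16-byte AES block; a stream cipher
# like RC4 would leave it at exactly 5 bytes.
assert len(pkcs7_pad(b"hello")) == 16
assert pkcs7_pad(b"hello")[-1] == 11
```

So block-cipher padding leaks slightly coarser sizes than a stream cipher, but nothing like the 1k bins discussed earlier in the thread.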

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
> please address
> https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

Sure. As soon as someone creates
http://en.wikipedia.org/wiki/Sunset_Shimmer so I can use an appropriate
example.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

Anthony-73
Google also seems to be using RC4-128, so that explains why there's no
padding by default there.

RC4 is a stream cipher.  The more secure ciphers are (all?) block ciphers.

"A block cipher works on units of a fixed size (known as a *block size*),
but messages come in a variety of lengths. So some modes (namely ECB and
CBC) require that the final block be padded before encryption."
(from https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation)


On Fri, Aug 2, 2013 at 10:42 PM, James Salsman <[hidden email]> wrote:

> > please address
> https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding
>
> Sure. As soon as someone creates
> http://en.wikipedia.org/wiki/Sunset_Shimmerso I can use an appropriate
> example.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

James Salsman-2
In reply to this post by James Salsman-2
Anthony, padding in this context means adding null or random bytes to the
end of encrypted TCP streams in order to obscure their true length. The
process of adding padding is entirely independent of the choice of
underlying cipher.

In this case, however, we have been discussing perfect forward secrecy,
which does depend on the particular cipher suite. ECDHE-RSA-RC4-SHA is an
example of a cipher suite whose ephemeral key exchange provides PFS, and
it is widely supported by Apache.
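A rough way to see the distinction: in OpenSSL-style suite names, the leading key-exchange token (ECDHE or DHE, i.e. ephemeral Diffie-Hellman) is what determines forward secrecy, independently of the symmetric cipher that follows. A name-based heuristic sketch (illustration only; real code should query the TLS library rather than parse names):

```python
# Heuristic: OpenSSL cipher-suite names beginning with an ephemeral
# key-exchange token provide forward secrecy; the symmetric cipher
# after it (RC4, AES, ...) is a separate choice.
def offers_pfs(suite: str) -> bool:
    return suite.startswith(("ECDHE-", "DHE-", "EDH-"))

assert offers_pfs("ECDHE-RSA-RC4-SHA")   # ephemeral ECDH: PFS
assert not offers_pfs("RC4-SHA")         # static RSA key exchange: no PFS
```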

The English Wikipedia articles on these subjects are all mostly
start-class, so please try Google, Google Scholar, and WP:RX for more
information.