Talk:Papers:Sending the Right Signals
beltzner, Jan 25th, 3am PST
- first draft completed and ready for comment
- feel free to make grammatical/spelling edits inline, I haven't bothered to do that check yet
- if you find it easier to insert comments in the text itself, again, feel free
- with screenshots, this will be 4 pages in length
- I used "trustworthiness" a lot, instead of "authentication"; is that OK?
- dbaron said that the goal of these meetings was to generate discussion, and that was the approach I took, as opposed to attempting to design the perfect UI, since the underlying technologies aren't necessarily finalized and it's not yet determined whether there will be multi-level authentication or preferred CAs. As a result, I tried to keep the position paper at a general level, commenting on the criteria for success that any solution would need to meet. That said, I tacked on some simple proposals at the end :)
- On a re-read, I think I might need to tie together the idea that I'm approaching this from a "how do we make online authentication as close as possible to the real world equivalent" or "what can we learn about how we already make these judgements in order to apply that to the UI" perspective.
- I'm rambling, aren't I? I'll stop now.
Hecker 09:24, 25 Jan 2006 (PST)
This is a useful beginning. Some quick comments:
- Using "trustworthiness" and similar terms is I think OK, as long as you are taking the perspective of the end user, who ultimately is the one making the decision on whether a particular service can be trusted (as in your RL examples).
- You write, "A connection to an entity should be said to be 'secure' when the connection is encrypted and it can be reasonably assured that communication is restricted to the user and the entity." One key question is, what does "reasonably assured" mean in this context? For example, by one interpretation connections made using self-signed certificates could be referred to as "secure", at least if there is some reason to believe that the certificate in question is in fact associated with the entity in question. (For example, the self-signed cert may have been exchanged out-of-band, or the user may have identified it as being associated with the entity based on other signals.) Another key question is, what does "entity" mean in this context? For example, some might interpret 'entity' as referring to the web site itself (i.e., a web server accessible at the particular domain name) and others might interpret 'entity' as referring to the web site operator (i.e., an identified individual or organization).
- In general I prefer using the phrase "identified by" to "signed by". My only caveat is that it doesn't read as smoothly in cases where the certificate is associated with a domain name rather than an individual's or organization's name.
A general comment: Let's make sure that this discussion doesn't focus solely on the SSL UI and related PKI-enabled features. In my opinion, in the context of the overall problem, PKI/crypto are overkill, because of the technical complexity underlying PKI/crypto-related features, and at the same time don't necessarily address the problem at hand, because of the ambiguity in what PKI purportedly attempts to do.
I think that the "signals" approach is the best way to go: figure out what signals can provide relevant information to the user who's trying to make a trust determination, and come up with a reasonably unified way to present those signals to the user. The signals themselves could be based on various back-end mechanisms: real-time site checks (as in the various anti-phishing toolbars), crypto-based mechanisms like X.509v3 certs or PGP-style signing, and so on. The front-end UI should be designed to accommodate multiple such mechanisms, both present and future, with each mechanism either built into the core products or addable via extensions.
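To make that concrete, here is a minimal sketch (Python, purely illustrative) of what a pluggable signal architecture might look like. The TrustSignal, SignalProvider, and gather_signals names are hypothetical, not part of any existing Mozilla API, and the two providers return canned results where real ones would inspect a certificate or query an anti-phishing service.

```python
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class TrustSignal:
    source: str        # e.g. "X.509 certificate", "real-time site check"
    verdict: str       # e.g. "identified", "suspicious", "unknown"
    explanation: str   # evidence the user can inspect


class SignalProvider(Protocol):
    def evaluate(self, site: str) -> TrustSignal: ...


class CertificateProvider:
    def evaluate(self, site: str) -> TrustSignal:
        # Placeholder: a real provider would examine the server certificate chain.
        return TrustSignal("X.509 certificate", "identified",
                           f"{site} presented a certificate chaining to a known root")


class BlocklistProvider:
    def evaluate(self, site: str) -> TrustSignal:
        # Placeholder: a real provider would query an anti-phishing service.
        return TrustSignal("real-time site check", "unknown",
                           f"{site} is not on any known blocklist")


def gather_signals(site: str, providers: List[SignalProvider]) -> List[TrustSignal]:
    """Collect signals from every registered back-end so the UI can present them uniformly."""
    return [p.evaluate(site) for p in providers]


for signal in gather_signals("example.com", [CertificateProvider(), BlocklistProvider()]):
    print(f"[{signal.source}] {signal.verdict}: {signal.explanation}")
```

The point is just that the UI consumes a uniform list of (source, verdict, explanation) entries, so new back-ends can be added, built in or via extensions, without changing how the signals are presented.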
shaver
In the case of a FOAF or web-of-trust signal, there's no "organization that accepts responsibility for that judgement".
There isn't much in here about letting people federate credentials, reuse them with different sites, or any piece of the authentication/trust space other than "helping users know if they should trust sites". I think the other side of the equation is very important too, perhaps obviously. ("How does Jane know which card the ATM will accept? How does the ATM verify her card and account?")
from call
- the last section is the weakest, and narrows the discussion back to SSL-style security, which is orthogonal to the rest of the discussion
- perhaps remove it, or generalize it out some more
- ignores non-browser clients (email, news aggregators, calendar, etc.)
- keep the focus on framing the discussion instead of coming up with prematurely-specified proposals
dria
- Minor quibble: you use "entity" and "object" interchangeably throughout. I found it slightly jarring. I'll leave it to you to decide whether this is actually a valid complaint :)
- The paragraphs about 'consistency' and 'clarity' are a bit jumbled. You start talking about consistency and then sort of fade into talking about clarity... Maybe talk about why clarity is important earlier in the piece, before stating our position (which is about consistency), and then just focus on why consistency is awesome in the "Position" section?
Random thoughts from a caffeine-addled mind (Johnath)
- I agree with the posters above: SSL has a lot of history to it, and many smart people have put their thinking into the current certificate/signing-authority model of things, but why not keep things open for the moment -- security folk don't often have a usability focus, so their models might be provably more secure, but by failing to consider the humans involved they become less secure through misuse and ignorance. Indeed, this is really what the whole discussion is about, so I'll stop talking now.
- "Recommended" is a word (from the last section) that will likely give you trouble, since no one wants to put their ass on the line, much less have a third party web browser do it on their behalf. Verisign sure as hell doesn't want to be sued when you don't get your ebay item, and the participants of a web of trust aren't really recommending that YOU buy any of the products at undergroundpharmacy.net, even if they have had reason to trust the operators themselves, in the past. I know you're trying to de-jargon things because "recommended" is better than "identity verified by X509 cert" but "recommended" is jargon too, it's legal jargon for "sue me."
- Consider that the real-world analogues for the mimic sites used in phishing attacks are things like mag-stripe "skimmers." One problem people have online is that they don't know which sites to trust because there isn't a century or twelve of brand development yet, but the reason skimmers work in real life is that they hijack all the signals of trustworthiness (heavy doors, cameras, etc.), and that's how mimic sites work too. Nothing is ever new, so maybe there is something to be learned from how real-world companies cope with skimming (how do they, I wonder?)
- Bruce Schneier. Is anyone from your camp talking to him yet? Buy him 7 comely lasses of virtue true if you have to - he is a guy whose brain you want in on this, I would think.
- Has an effort been made to scope this? How good do we want to make things? Is it okay if the online experience is as trustworthy as, say, mail order ads in the back of magazines? Does it have to be as trustworthy as a convenience store bulk bin? Does it have to be as trustworthy as a bank? Security is usually a tradeoff against ease of use, so how much is enough?
- The article as it stands talks about three concepts: encryption, authentication, and recommendation. I would argue that while, on the one hand, we want to capture the language of recommendation because that's what users want to hear (and it's *ALL* users want to hear: "Yes, smart people have made sure that this site is fine. Go ahead."), the problem is that neither encryption nor authentication (nor both!) provides that; indeed, none of the current stack does. And maybe firefox/w3 doesn't want to be in that business anyhow. On the other hand, it is clearly the browser's business, as conduit between user and site, to inform the user about the encryption and authentication state of their interactions. Maybe you want to, again in the interest of using words people are comfortable with, talk about a conversation being "private" or not (encryption) and a site being "identified" or not (authentication). I think people understand that calling a conversation "private" doesn't carry implications about whether it's a safe transaction, whereas "secure"/"safe" do carry those connotations. A "private" conversation with Vinny "The Ladykiller" Corleone is not necessarily a "secure" one.
That's all until I come up with more. Glad to see this work happening.
Edit: Urr - whups - don't know why I'm a sub-section of dria's comments. Ah well.
Think transparency/'scientific method': Present conclusions/judgements about trustworthiness with reasoning, backed by evidence, that users can reproduce.
Accountability is the ability to account for your conclusions in ways that others can reproduce.
Technology which judges some communications as "trusted", "recommended", "suspicious", "suspected", "untrustworthy", "might be a scam", etc. needs to explain the basis for this judgement. We cannot rely on technology as a perfect oracle of trustworthiness, and therefore we must look into the reasoning behind its judgements just as we do IRL, and make our own judgements about when to trust the technology and when to discount its conclusions.
IN STORY LIFE: Suppose Jane is considering product X, and Ken tells Jane what he thinks about X. Jane can ask Ken how he came to his judgements, and what evidence he has. Jane can then judge the risk for herself.
Suppose Ken says he thinks X is a scam, and Jane finds that the reason is that Ken had one bad experience with X's vendor over a different product, and from that he always recommends against the vendor. Jane now knows that Ken's reasoning applies to all his judgements about that vendor's products, but may not generalize to her situation, nor to his judgements about other vendors. Therefore, if she has heard good things elsewhere, she may discount Ken's opinions about X's vendor but still ask his opinion about other vendors.
On the other hand, if Ken works in a repair shop and knows that X frequently breaks down, then Jane can raise the weight she gives to Ken's opinions.
Suppose Ken says he thinks X is great, and Jane finds that the reason is that Ken has invested in X's vendor, so he encourages everyone to buy that vendor's products. Again, Jane can discount Ken's reasoning just for that vendor.
On the other hand, if Ken uses X daily in his work and finds it helps him with tasks similar to those Jane is hoping to accomplish, then Jane may raise the importance she gives to Ken's reasoning.
CLOSER TO REAL INTERNET LIFE: The TB/SB mail reader now has a warning that says "this message might be a scam", but gives the user no explanation of how that judgement was reached, nor any evidence with which to reproduce the conclusion for themselves.
The TB/SB technology is displaying the warning on many reputable newsletters from trusted, well-known companies. With no explanation, users cannot reproduce the reasoning, and may therefore come to the general conclusion that the technology often produces faulty judgements. Users may cope by distrusting the warning and ignoring it, or by turning it off.
As it turns out, email newsletters often contain readable urls in the text where the underlying link redirects through a hit counter url (so the newsletter publisher can keep track of what is interesting to its readers). The TB/SB detector is detecting a mismatch between the presented url and the actual link (one of several ways it detects scam messages).
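As an illustration only (this is a guess at the heuristic, not the actual TB/SB code), a mismatch check of this kind might look roughly like the Python sketch below; looks_like_misleading_link is a made-up name. It flags a link whose visible text resembles a URL on a different host than the one the link actually points to, which is exactly the pattern hit-counter redirects in newsletters trigger.

```python
import re
from urllib.parse import urlparse


def looks_like_misleading_link(link_text: str, href: str) -> bool:
    """Return True when the visible link text resembles a URL whose host
    differs from the host the link actually points to."""
    # Only consider links whose visible text itself looks like a URL.
    match = re.search(r'(?:https?://|www\.)\S+', link_text)
    if not match:
        return False
    shown = match.group(0)
    if not shown.lower().startswith('http'):
        shown = 'http://' + shown
    shown_host = (urlparse(shown).hostname or '').lower()
    actual_host = (urlparse(href).hostname or '').lower()
    return shown_host != '' and shown_host != actual_host


# A newsletter link whose text names the vendor but routes through a hit counter:
print(looks_like_misleading_link("Visit http://example.com/sale today!",
                                 "http://hitcounter.example.net/click?id=42"))  # True
# A plain descriptive link is not flagged:
print(looks_like_misleading_link("Click here to unsubscribe",
                                 "http://example.com/unsubscribe"))             # False
```

A warning built this way could carry the list of offending links along with it, which is exactly the kind of evidence the next paragraph argues the user should be shown.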
If the warning instead said "this message might be a scam: it contains misleading links" (or contained a details button to such an explanation and perhaps a list of offending links), then users can ask for reasons and evidence, look at the links for themselves, reproduce the reasoning, and therefore can more confidently learn to discount the technology's judgement just in the case of offending links in reputable newsletters.
After seeing the above reasoning, the user also knows that when the warning does not appear (the technology does not conclude the current message is a scam), one reason is that the links are not misleading. By knowing which evidence the technology is examining, users can make better judgements about what it might miss. For example, since the technology looks at the text, it might miss a url link that is actually rendered as an image (just as spam filters do), and so users can learn that they must remain more vigilant when a message contains image attachments, or when the link text is not a url.
Gekacheka 17:59, 31 Jan 2006 (PST)
iang 16 Feb 2006 (PST)
I think there is a difficult assumption in your position paper that leaves it wobbly: that of security being a standard. There is no particular reason for this, although, granted, it is nice to have a standard once all the fighting is over, as that makes it easier to compete.
Unfortunately, as you recognise, we are in a world which is beset by attackers. If we knew how to stop those attackers, we'd just go and do that, wouldn't we?
But we don't, as a web community. There are lots of competing ideas out there, and the inventors of these ideas are battling it out to see which ones emerge. Of necessity, these ideas are all different. See major point above. And of the same necessity, they all have to be trialled in the marketplace so as to prove themselves.
Which means we have to go through a period of chaos, and there isn't much point in fighting it. Standards in web security are off the agenda for the next couple of years, I'd predict, but they'll come back after it all settles down and the winning ideas are clear.
terminology suggestions
I think some of the main concepts being developed in this article are right on track.
What's IRL? How about IRW for "In the Real World"?
In the section "Signals, IRL vs. Online", "dimensions" isn't the best term. I think most people would agree that the real world has 3 dimensions of space and 1 of time (except for String Theorists)! How about aspects, hints, cues, clues, characteristics, signatures, vectors, or data-types? --Bill M 08:11, 19 Mar 2006 (PST)