Question:
The Music Industry is trying hard to develop a secure music format. They
are hoping this will enable them to better protect against piracy. But
with the MP3 format already out there, and freely copyable, would a new
secure format work? Would people accept what the RIAA is proposing, or
would they stick with MP3s and get free music (albeit illegal for
copyrighted music)?
Response:
As MP3 is not legally banned, and one can make MP3 recordings from CDs,
I suspect that format will remain. If the material is uncopyrighted
or distributed with the copyright holder's permission, there should
be no problem. As to whether people will use the new secure format,
I suspect the answer would tell us more about people than about the
technology.
Question:
In class Bishop mentioned a hack of Netscape SSL in which the
initialization routine used specific local information. An algorithm that
attempts to produce a random number must rely on somewhat random local
values (time, size of a specific file, etc.), but those seed values would
be easy to determine if the attacker has access to the machine, as in the
CSIF labs. Is there any clever way to choose random seed values that cannot
be discovered? (Intel is trying to do this by measuring specific temperature
values to the finest detail, which they believe to be completely random.)
Response:
Aside from measuring truly random physical phenomena,
you have to gather a set of highly volatile data and mix
it thoroughly. One common technique is to run the output of the
commands ps gaux (or ps -ef, whichever your system
understands), vmstat, and netstat -a through a
cryptographic hashing function like MD5 or SHA-1. This seems to
work well enough in practice.
For more details on the problems of obtaining randomness, I strongly recommend reading RFC 1750, Randomness Recommendations for Security, by Donald Eastlake III, Stephen Crocker, and Jeffrey Schiller (Dec. 1994).
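As a rough illustration, here is a minimal Python sketch of that mixing step: it runs a few volatile commands, hashes their combined output together with the current time, and uses the digest as seed material. The command names and the use of SHA-1 follow the description above; treat it as a sketch of the idea under those assumptions, not a vetted entropy source (a modern system would draw from /dev/urandom or use a stronger hash such as SHA-256).

    # Sketch: derive seed material by hashing volatile system data, as
    # described above. Not a substitute for a real entropy source such as
    # /dev/urandom; the command names assume a Unix-like system.
    import hashlib
    import subprocess
    import time

    def gather_seed():
        commands = [["ps", "aux"], ["vmstat"], ["netstat", "-a"]]
        h = hashlib.sha1()                   # the text suggests MD5 or SHA-1
        h.update(str(time.time()).encode())  # mix in the current time as well
        for cmd in commands:
            try:
                out = subprocess.run(cmd, capture_output=True, timeout=10).stdout
            except (OSError, subprocess.TimeoutExpired):
                out = b""                    # skip commands the system lacks
            h.update(out)
        return h.digest()                    # 20 bytes of mixed, volatile data

    if __name__ == "__main__":
        print(gather_seed().hex())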
Question:
In Chapter 1 of the text, under the section titled "Goals of Security", it is
stated that sometimes, "retaliation (by attacking the attacker's system) is
part of recovery."
The attacker was wrong, but what makes it suddenly okay for the 'attackee' to
attack back? It seems to me that this would make them equally wrong. So WHY
is this sometimes a part of the recovery process?
Response:
Yes, in general I agree: responding by attacking back is very bad. However,
there are two circumstances that I can think of that would warrant such
a reaction. First, if the computers were in two different countries
that were at war with one another, then an attack is an act of war and
legally such a response is justified (if I understand correctly what little
I was taught about international law). Second, an attack to obtain
information (done under a court order), or to disable the mechanism
used to attack your site, might be defensible, provided that was the
only thing the counterattack did. Both of these are
extreme circumstances, and in the second, every alternate mechanism
should be tried first, including contacting the system administrators
of the remote site.
Question:
What are the differences between SSH version 1, and SSH version 2? More
specifically, I've heard a number of "security people" say that they use
SSH v.1 even though version 2 is now available. What is the problem with
version 2 that would convince people not to switch?
Response:
I don't know of any reason that people would not switch, unless they
used Macs; SSH 2 is not available for them (as far as I know!). The SSH
web site has a page on the differences; the main one is that SSH version 2
allows more public key algorithms than only RSA, and supports more classical
(session) cryptosystems than does SSH version 1.
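If you are curious which protocol versions a particular server offers, both SSH 1 and SSH 2 servers announce an identification string as soon as you connect; a server that speaks both typically identifies itself as SSH-1.99. Here is a small Python sketch that reads that banner; the host name is only a placeholder.

    # Sketch: read the SSH identification string a server sends on connect.
    # A string starting "SSH-1.99-" means the server speaks both protocol 1
    # and protocol 2; "SSH-2.0-" means protocol 2 only; "SSH-1.5-" protocol 1.
    import socket

    def ssh_banner(host, port=22):
        with socket.create_connection((host, port), timeout=5) as s:
            return s.recv(256).decode("ascii", "replace").strip()

    if __name__ == "__main__":
        print(ssh_banner("ssh.example.com"))   # placeholder host name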
Question:
I've just seen an advertisement for a device called "Ethentica". It's a
biometric (fingerprint) authentication system. I see how biometrics could
be more secure if you are authenticating yourself locally to a physically
secure machine, however it seems to me that biometric authentication would
be just as weak as long passwords (still sniffable, etc) for remote
authentication. So my question is: Is there any benefit to using
biometric authentication remotely?
Response:
This is a problem with any remote authentication scheme.
The specific assumptions are that the remote system can be trusted,
and that the communication of the authentication results can be trusted.
The usual technique is to encipher the connection between the two systems,
so the same data will appear different to the snoopers.
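For instance, here is a minimal Python sketch (using the third-party cryptography package, with a made-up stand-in for the biometric template) of what enciphering buys you on the wire: the same data, enciphered under the same key with a fresh random nonce, looks different every time, so a snooper never sees the biometric data itself and identical transmissions do not look identical.

    # Sketch: the same plaintext enciphered twice with fresh nonces yields
    # different ciphertexts, so identical transmissions appear different to
    # a snooper. Requires the third-party "cryptography" package; the
    # template value is a placeholder for real biometric data.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    template = b"example fingerprint template"   # placeholder data

    c1 = aesgcm.encrypt(os.urandom(12), template, None)
    c2 = aesgcm.encrypt(os.urandom(12), template, None)
    print(c1 != c2)   # True: the same data appears different on the wire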
Question:
Suppose a person uses a known attack from the Internet, say from the
hackers.com website, to break into a system and destroy information. Can
the developers of the web site from which the attack was taken be held
responsible, or does the disclaimer at the site shield them? If not, to
what extent can they be held responsible?
Response:
At this point no one knows; it is a good question. The problem is that
the law is unsettled. Here's another version of what you ask: if someone
breaks into the CSIF, and from that system attacks a company, is
the University of California liable in some way to the attacked company?
Question:
Can't encryption actually *impair*
security? After all, isn't that why the U.S. didn't want 128-bit
encryption in the hands of other countries?
Response:
I think it depends on your point of view. Suppose the U. S. and the
Grand Duchy of Fenwick are having diplomatic problems (the U. S. is
accusing the Grand Duchy of stealing the Q-bomb, for example). It
is to the U. S.'s security interests to be able to read the diplomatic
traffic between the Grand Duchy and its diplomatic personnel stationed
in the U. S. Similarly, it is in the Grand Duchy's security interests
to keep that traffic confidential (ignoring the notion of
disinformation, of course ...).
Also, a minor clarification. I very firmly believe that all (or almost all) U. S. military personnel and government officials realized that other countries could obtain 128-bit encryption regardless of what the U. S. did to keep it from them (for example, Shamir and Biham, whom I mentioned in class, are Israelis, and the folks who won the AES competition are Belgian). I think what they feared was those countries getting access to sophisticated hardware to do the encryption, which is a much more reasonable concern. But governments and bureaucracies being what they are, the regulations were drafted badly, and the problem was categorized badly (all cryptography was considered a "munition"). Hence the current mess.
Question:
I have a question about the
security of ssh when using a rotating password scheme. Let's say that
the client (user) has a program on his end which generates a sequence
of passwords such that once a password is used on the server (host) it
is no longer valid and the user must use the next password provided by
his password generator. Only the user's program and the server know the
correct sequence of passwords. The point of this is to create a secure
login environment such that if someone were snooping packets during a
login, the password they snooped would not be valid.
Response:
What you're describing is called a one-time password, and we will discuss
it in class. Ssh effectively uses this scheme, as it encapsulates
random data into the password exchange, so the same (reusable) password
and random data are enciphered together and sent over the network. The
result is different for each transmission.
The problem with using one-time passwords over an unprotected network connection is that, after you authenticate, I may be able to "steal" your connection. One-time passwords provide security for authentication only, not for integrity and/or confidentiality.
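To make the idea concrete, here is a small Python sketch of one classic way to build such a password sequence, a Lamport-style hash chain (the idea behind S/Key). It only illustrates the concept described in the question, not what ssh itself does; the function names are made up.

    # Sketch: a Lamport-style one-time password chain (the idea behind
    # S/Key). The server stores only the last value it accepted; each
    # password sent over the network is valid exactly once.
    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def make_chain(secret: bytes, n: int):
        """Client side: hash the secret n times; passwords are used in reverse."""
        chain = [secret]
        for _ in range(n):
            chain.append(h(chain[-1]))
        return chain

    # Setup: the server is given only the final value of the chain.
    chain = make_chain(b"client secret", 5)
    server_state = chain[-1]

    # Login: the client reveals the previous element; the server checks that
    # hashing it gives the stored value, then remembers the revealed element.
    def login(server_state: bytes, password: bytes):
        if h(password) == server_state:
            return True, password        # accepted; update the stored value
        return False, server_state

    ok, server_state = login(server_state, chain[-2])
    print(ok)                            # True the first time
    ok, server_state = login(server_state, chain[-2])
    print(ok)                            # False: that password is now spent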
Question:
If a user on HOST_A types "xhost +" or "xhost HOST_B" on a Unix machine,
he allows every host, or just the specific host HOST_B, to display any
X program run on those machines on HOST_A's X session. This can pose a great
security problem since any person can log into HOST_B, and run a malicious
program. Such programs can be a login/password collecting program that
looks like a login prompt which the user on HOST_A may think is a regular
login prompt to some host he wants to connect to. Or it can be a hacked
x-lock program which will cause the screen to blank with a screen saver
and force the user to enter in his password. These kinds of attacks are
effective when someone in the lab looks away or steps out for a second.
My question is, why doesn't the whole xhost procedure include some form of security mechanism? They can do a number of things such as allow only programs that are run by the same user as whoever is logged into the x-session to go through. They could even go as far as to ask the user on HOST_A if he wants the incoming program to run. Thank You.
Response:
xhost(1) is actually a security mechanism, very much akin to the
.rhosts files of the rlogin(1) programs. It's just very weak.
X does have stronger mechanisms, such as the MIT-MAGIC-COOKIE-1 scheme.
In this case, when a client (program) wants to connect to the X server,
it sends a 128-bit random number (called a cookie) to the
server. The server allows the connection only if the cookie
matches one that it has. This forces the user to either run clients
on the same host as the server (the typical case), or to cut and paste the
cookies so the remote client knows what cookie to use.
The cookies are stored in the .Xauthority file in the
user's home directory. Enciphered cookies are available using the
XDM-AUTHORIZATION-1 protocol. Others exist too.
A good reference is the Xsecurity(1) manual page. It's available on the Linux (PC) systems in the CSIF. If you get the "no such page" message, be sure that the directory "/usr/X11R6/man" is in your MANPATH environment variable!
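Here is a rough Python sketch of the idea behind the cookie check only; the real X wire protocol and the .Xauthority file handling are omitted.

    # Sketch of the idea behind MIT-MAGIC-COOKIE-1: the server accepts a
    # client only if the 128-bit cookie the client presents matches one the
    # server already holds. This omits the actual X protocol exchange.
    import os
    import hmac

    server_cookie = os.urandom(16)        # 128-bit random cookie

    def server_accepts(client_cookie: bytes) -> bool:
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(client_cookie, server_cookie)

    print(server_accepts(server_cookie))   # True: client knows the cookie
    print(server_accepts(os.urandom(16)))  # False: wrong cookie, refused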
Question:
This question has to do with the example of the Netscape web browser
security implementation. The explanation in class was that the grad
students used the knowledge of how the keys were generated to hack the
cryptography implementation. So you considered this a security
"problem". If Netscape didn't give away the source to their code,
wouldn't this not be a problem then? Isn't it Netscape's fault now?
Response:
At the time this was done, Netscape had not made their source code available,
so this was a problem before source code was released.
The graduate students read the documents describing how Netscape
did their encryption, and guessed that Netscape was using several
system and process values. I suspect they then confirmed their guess by
predicting the key to a session they launched (at least, I would have).
At that point, they could crack sessions.
Whose fault it is is much less important than fixing the problem and making sure it does not recur.
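To see why seeding from system and process values is dangerous, here is a hedged Python sketch. It is not Netscape's actual algorithm; it simply mimics the general idea of deriving a key from the time of day and process IDs, and shows that an attacker who can guess or observe those values can reproduce the key.

    # Sketch: a key derived from guessable values (time of day, process IDs)
    # can be recovered by anyone able to narrow those values down. This only
    # illustrates the general flaw; it is not Netscape's actual code.
    import hashlib
    import os
    import time

    def weak_key(seconds: int, pid: int, ppid: int) -> bytes:
        seed = f"{seconds}:{pid}:{ppid}".encode()
        return hashlib.md5(seed).digest()

    # "Victim" generates a session key from local, observable values.
    now = int(time.time())
    key = weak_key(now, os.getpid(), os.getppid())

    # An attacker who can pin the time down to a few seconds and read process
    # IDs (e.g. with ps) simply enumerates the small candidate space.
    def recover(target: bytes, around: int, ppid: int):
        for t in range(around - 5, around + 1):
            for pid in range(1, 65536):
                if weak_key(t, pid, ppid) == target:
                    return t, pid
        return None

    print(recover(key, now, os.getppid()))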
Matt Bishop, Office: 3059 Engineering Unit II, Phone: +1 (530) 752-8060, Fax: +1 (530) 752-4767, Email: bishop@cs.ucdavis.edu
Copyright Matt Bishop, 2000. All federal and state copyrights reserved for all original material presented in this course through any medium, including lecture or print.