Second Set of Questions


Question:
How do you decide when to end a penetration study? For example, when you hire a team to discover the loopholes in a system, when do they decide to stop spending time on it because they're sure they have discovered all the loopholes? In other words, is the duration of the penetration study part of the contract, or does the hacking team decide when to end it based on their experience with the system?

Response:
The duration of a penetration study is part of the contract. It may be a time limit, or the study may end when a specific event (such as sensitive data being obtained) occurs. At no point do testers think they have found all flaws. The study is an attempt to draw conclusions about the general security of the system, and where efforts to find and fix security problems should be concentrated.


Question:
How secure is the Kerberos protocol used by this University? I've read in a magazine that in Kerberos, the keys entered by the user are cached locally. This means that in a diskless environment, the /tmp directory is stored on a file server and the keys travel across the network. Our data can then be accessed by another party! I've read that the Leighton-Micali scheme is more robust, so shouldn't that protocol be preferred over Kerberos? What are some of the advantages of Kerberos that make this security protocol sufficient for the protection of students' data?

Response:
Kerberos uses a temporary file to store tickets, not cryptographic keys. (A ticket is a token that contains a process identity and is enciphered with the secret key of the server. For more details, see the handout Two Protocols.) The cryptographic keys are stored in memory. On a multi-user system, these are still potentially visible to processes with root privileges, but it's hard to locate the exact keys in memory. Also, keys are never sent over the network in the clear.
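
If you're curious about what actually sits in that temporary file, here is a minimal sketch in C. It assumes the MIT Kerberos conventions (the KRB5CCNAME environment variable and a default cache file of /tmp/krb5cc_<uid>); those names are implementation details of MIT's code, not part of the protocol. All it does is locate the ticket cache and check that it is owned by you and readable by no one else; as noted above, the cryptographic keys themselves are held in memory, not in this file.

/*
 * Sketch: find the Kerberos ticket cache and check its permissions.
 * KRB5CCNAME and /tmp/krb5cc_<uid> are MIT Kerberos conventions and
 * are assumptions here, not something guaranteed by the protocol.
 * Compile with:  cc -o cccheck cccheck.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char defpath[64];
    const char *cc;
    struct stat st;

    cc = getenv("KRB5CCNAME");
    if (cc == NULL) {
        /* assumed default location for MIT Kerberos on UNIX */
        snprintf(defpath, sizeof(defpath), "/tmp/krb5cc_%lu",
                 (unsigned long)getuid());
        cc = defpath;
    } else if (strncmp(cc, "FILE:", 5) == 0)
        cc += 5;                /* strip the cache-type prefix */

    if (stat(cc, &st) < 0) {
        perror(cc);
        return 1;
    }

    /* The cache holds tickets; it should be mode 0600 and owned by you. */
    printf("ticket cache: %s\n", cc);
    printf("owner uid:    %lu (yours is %lu)\n",
           (unsigned long)st.st_uid, (unsigned long)getuid());
    printf("mode:         %04o%s\n", (unsigned)(st.st_mode & 07777),
           (st.st_mode & (S_IRWXG | S_IRWXO)) ? "  <- group/other access!" : "");
    return 0;
}

Run it on the machine you log into and you should see your own UID and a mode of 0600.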

I don't know of software as mature as Kerberos that implements Leighton-Micali. That's a major consideration in the use of cryptosystems: the systems that implement them, and how mature and how well integrated those systems are. Kerberos has been around for 15 years (the theory for 25), and the protocol is widely used and implemented. That's not true of Leighton-Micali, at least not yet.


Question:
Once a security hole/threat has been discovered, and a patch or workaround produced, how good are software manufacturers about avoiding those security holes in future software packages? Do the same security problems continue to arise, even though they may be avoidable?

Response:
This is a sore point among many people. Some vendors are very responsive (Sun, the free UNIX-like systems like FreeBSD and Linux, and Microsoft). Others say it will be fixed in the next release. The problem is that many of the problems arise from a lack of coherent design, leading to an attempt to "patch" things up. And yes, the same security problems continue to arise, even when they are avoidable. If you have any ideas on how to stop this, we'd be very interested. Nothing has seemed to work in the past 25 years!


Question:
On a system where memory is shared, does each user get a certain amount or is there no limit? Could a user fork a process a bunch of times where the only thing that the process does is malloc a bunch of pointers and fork more processes to eat up the system resources and slow things down for other users?

Response:
This varies from system to system. On some systems, filling up the process table with fork(2) calls would crash the machine. On others, the system reserves a couple of slots so that a root process can terminate the offenders. On still other systems, each user has a quota of processes, memory, and disk space that prevents the problems you describe.
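
To make the third case concrete, here is a rough sketch in C of how a per-user process quota stops the attack you describe. It assumes a system with the RLIMIT_NPROC resource limit (most BSD-derived and Linux systems have it; the exact name and behavior vary by UNIX). Once the quota is reached, fork(2) fails with EAGAIN rather than filling the process table, so only the offending user is affected.

/*
 * Sketch: a process quota in action.  RLIMIT_NPROC is assumed to exist
 * on your system; the limit counts all processes owned by the user, so
 * the shell you run this from counts against the 32 as well.
 * Compile with:  cc -o quota quota.c
 */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct rlimit rl;
    pid_t pid;
    int created = 0;

    /* Cap the number of processes this user may have at 32. */
    rl.rlim_cur = rl.rlim_max = 32;
    if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
        perror("setrlimit");
        return 1;
    }

    for (;;) {
        pid = fork();
        if (pid < 0) {
            /* Quota hit: the fork bomb stops here; other users are untouched. */
            printf("fork failed after %d children: %s\n",
                   created, strerror(errno));
            break;
        }
        if (pid == 0) {         /* child: just occupy a process slot */
            pause();
            _exit(0);
        }
        created++;
    }

    /* Clean up: ignore SIGTERM ourselves, terminate and reap the children. */
    signal(SIGTERM, SIG_IGN);
    kill(0, SIGTERM);
    while (wait(NULL) > 0)
        ;
    return 0;
}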


Question:
This is more of an ethical question. In class today (Tuesday), you mentioned the gaming issue where you found that using the game's feature of spawning a shell gave you root access. If your school's policy said explicitly that no one besides privileged sysadmins should have root access, would you be held responsible, even if you reported it right away? (This goes back to the girl finding a hole, exploiting it, and reporting it.) That is, if you find a hole "by accident", does the intent matter?

Response:
This varies, but the general rule is that discovering a security hole is fine; exploiting it is not. We found the problem purely by accident (yep, I was one of the ones who found it), and when we realized what was going on, we immediately exited the shell and reported the problem. Folks were amused, as we were considered "trusted" (heck, we were grad student system administrators and system programmers, and we all had the root password).

Had it been an ordinary student, I suspect the test would be: when you figured out you were root, did you exit immediately and report it? If yes, I can't imagine anyone sane taking action against the reporter. If no, it's more of a gray area.


Question:
I am using RESnet at home. Recently, I received an email from school housing saying that they will cut down the bandwidth when we access off-campus sites. The reason they are doing this is that they have "MONITORED" that most people are not using it to download MP3s, games, etc.

Doesn't this violate our personal privacy?

Response:
By "not using it" do you mean "using it"? I cannot imagine the University requiring you to download MP3, games, and such ... (sorry -- couldn't resist).

I suspect what is happening is this: bandwidth usage has climbed to the point that the traffic interferes with people using the network. The RESnet administrators and network administrators began to monitor the traffic to figure out what was going on. Typically, this involves looking at only the headers of the packets. If a header has Napster port numbers, it's a good guess that MP3s are involved, and so forth. Once they discovered the main protocols in use, they had three options: restrict use of those protocols, give everyone a certain maximum bandwidth to use however they want, or simply give RESnet as a whole a fixed maximum bandwidth. From your question, I don't know which of these they chose.

I don't think it violates your personal privacy because they couldn't care less what you're listening to, and in fact from your description I doubt they could say, "Well, Matt Bishop listened to Eubie Blake last night and to Scott Joplin tonight ..." I think they were just looking at headers. I don't think that's any different than monitoring water usage to find the big wasters. They don't ask what you do with the water; they just say that you're using 100 times as much as everyone else, and we're in a drought, so we're going to restrict you to what everyone else is using. If you need more, you need to make a case for it.
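
For what it's worth, here is a rough sketch in C of what header-only monitoring looks like, using the libpcap packet capture library. The interface name (eth0) and the port number (6699, often cited at the time as a Napster transfer port) are my guesses for illustration, not a description of what the RESnet staff actually ran. The 68-byte snapshot length is the point: only the link, IP, and TCP headers are captured, never the data, so nobody sees what you are listening to.

/*
 * Sketch: header-only traffic accounting with libpcap.
 * "eth0" and port 6699 are assumptions for illustration only.
 * Compile with:  cc -o count count.c -lpcap
 */
#include <pcap.h>
#include <stdio.h>

static unsigned long packets, bytes;

/* Called once per captured packet; we only look at the capture header. */
static void account(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *pkt)
{
    (void)user;
    (void)pkt;                 /* payload is not even captured */
    packets++;
    bytes += h->len;           /* h->len is the original packet length */
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program prog;
    pcap_t *p;

    /* 68-byte snapshot length: enough for the headers, nothing more. */
    p = pcap_open_live("eth0", 68, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* BPF filter: TCP traffic to or from port 6699 (assumed Napster port). */
    if (pcap_compile(p, &prog, "tcp port 6699", 1, 0) < 0 ||
        pcap_setfilter(p, &prog) < 0) {
        fprintf(stderr, "filter: %s\n", pcap_geterr(p));
        return 1;
    }

    /* Count 1000 matching packets, then report. */
    pcap_loop(p, 1000, account, NULL);
    printf("%lu packets, %lu bytes matched the filter\n", packets, bytes);

    pcap_close(p);
    return 0;
}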


Matt Bishop
Office: 3059 Engineering Unit II Phone: +1 (530) 752-8060
Fax: +1 (530) 752-4767
Email: bishop@cs.ucdavis.edu
Copyright Matt Bishop, 2000. All federal and state copyrights reserved for all original material presented in this course through any medium, including lecture or print.

Page last modified on 10/27/2000