This is an example of a fully worked out answer to a question asking for a proof. In a proof, we have to lay out our approach in detail, so one can follow our reasoning. We’ll take a problem in the book as an example. The problem asks you to prove that the set of unsafe protection systems is recursively enumerable.
In writing your answers, realize that there will be a main point or idea in what you write. It’s critical you express it clearly.
In what follows, the proof itself is in red. How we get to each step is in blue.
Let’s begin by recalling what recursively enumerable means. It means that the entities in question can be listed by some effective procedure; that is, there is a Turing machine that eventually outputs every member of the set. Now we know where we’re going.
We will use the construction in the proof of the HRU result. There, an access control matrix is mapped onto a Turing machine, and the right to be leaked corresponds to the halting state. So we’ll work the problem in terms of executions of Turing machines. A protection system that leaks a right is unsafe; similarly, we will call a Turing machine execution that enters the halting state unsafe. If a right does not leak, the Turing machine execution will not enter the halting state. So, if the execution is not unsafe (i.e., safe), then it will not halt. If it is unsafe, it will.
Our approach is to list all the Turing machine executions so we can run them. We could assign numbers arbitrarily, but then two Turing machine executions that are the same might receive different numbers. So we need to tie the numbering to the Turing machine execution itself. Fortunately, there is an easy way to do this — use Gödel numbers. A Gödel number encodes the symbols on the Turing machine’s tape as a single integer, so two executions receive the same number exactly when their encodings are identical. So our list consists of Turing machine executions in the order of their Gödel numbers.
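To make the encoding concrete, here is a minimal illustrative sketch in Python. It is not the exact encoding used in the HRU construction; it uses the classic prime-exponent scheme, where the i-th tape symbol’s code becomes the exponent of the i-th prime. The alphabet and symbol names below are assumptions for illustration only.

```python
# Illustrative Gödel-style encoding of a tape (NOT the exact HRU encoding).
# Identical symbol sequences always receive identical numbers, which is the
# property the proof needs.

def primes():
    """Yield the primes 2, 3, 5, ... by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def goedel_number(symbols, alphabet):
    """Encode a symbol sequence as the product p_i ** (code of symbol i)."""
    code = {s: k + 1 for k, s in enumerate(alphabet)}  # symbols -> 1, 2, 3, ...
    g, gen = 1, primes()
    for s in symbols:
        g *= next(gen) ** code[s]
    return g

# A hypothetical tape alphabet: 0, 1, and the blank symbol B.
alphabet = ["0", "1", "B"]
# Identical tapes get identical numbers; different tapes get different ones.
assert goedel_number(["1", "0", "B"], alphabet) == goedel_number(["1", "0", "B"], alphabet)
assert goedel_number(["1", "0", "B"], alphabet) != goedel_number(["0", "1", "B"], alphabet)
```

Because prime factorizations are unique, the number determines the tape contents and vice versa, so ordering executions by Gödel number is well defined.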
In this answer, the main idea is the need to run the Turing machines in parallel, so if one Turing machine does not terminate, you won’t wait forever to start the next one.
Next, say how you deal with the main point or idea. Here, you would run the first machine some number of steps, then the second some number of steps, then the third, then the second, then the first, and so forth. This runs the machines in parallel. This is called a diagonalization technique (to see why, look at the picture below).
Finally, you need to make the argument rigorous. That may mean using mathematics or notation; it may mean reasoning carefully through the main point or idea (as here).
For grading, we will look first for the main idea, then at how you deal with it, and lastly at the rigor. If your argument is correct but not rigorous, you will get most of the points. If you get only the main idea, you will still get a good number of points. So clarity of expression is important.
Question. Prove Theorem 3.3. (Hint: Use a diagonalization argument to test each system as the set of protection systems is enumerated. Whenever a protection system leaks a right, add it to the list of unsafe protection systems.)
Answer: Represent the set of all possible systems as a set of executions of Turing machines. Each such execution has a unique Gödel number. Order the executions by their Gödel numbers (each such execution is represented by TM_i, where i is the position of that Turing machine in the list).
We also have to define when an execution stops. When a Turing machine execution stops, it enters the halting state. We’ll call this state q_f (the “f” for “final”). So in terms of this problem, TM_i stops if, and only if, it enters state q_f.
Now, TM_i halts in state q_f if, and only if, the right in question leaks; that is, if, and only if, the system halts in an unsafe state.
If TM_i has not halted by a given time, it may be safe (and so never halt), or it may simply not yet have reached its unsafe (halting) state.
It’s tempting to run the first machine until the execution stops, and then the second, and so forth. But what happens if the first machine execution is safe? It will never halt, so we’ll never get to the second execution.
This means we cannot serially execute the systems, proceeding to TM_{i+1} when TM_i halts. If we did that, and TM_i never halted, we would never begin executing TM_{i+1} and so could not enumerate the unsafe systems in the list with index greater than i.
So we have to interleave machine executions. The obvious approach is to run the first machine for one step, then for a second step; then run the second machine for one step, the first for a third step, the second for a second step, the third for its first step; and so forth. This is called a diagonalization approach. Its advantage is that any unsafe machine will eventually halt, at which point we can drop it from the pattern. The safe machines will never halt, but every machine still advances.
So, use a diagonalization technique. Execute the first instruction in TM_1. Execute the second instruction in TM_1. Execute the first instruction in TM_2. Execute the third instruction in TM_1. Execute the second instruction in TM_2. Execute the first instruction in TM_3. Execute the fourth instruction in TM_1. Execute the third instruction in TM_2. Continue this pattern of execution, indicated in the following picture by numbers representing the order in which the steps are executed:
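The order just described can also be generated programmatically. The short Python sketch below (the function name is ours, purely for illustration) emits the (machine, instruction) pairs in exactly this diagonal order: diagonal d runs instruction d + 1 − j of machine j, for j = 1, …, d.

```python
# Sketch of the diagonal schedule: entry k of the output is the k-th step
# executed, given as a (machine, instruction) pair, both 1-based.

def diagonal_order(n):
    """Return the first n (machine, instruction) pairs of the diagonal schedule."""
    order = []
    d = 1
    while len(order) < n:
        # Diagonal d runs instruction d + 1 - j of machine j, for j = 1, ..., d.
        for j in range(1, d + 1):
            order.append((j, d + 1 - j))
            if len(order) == n:
                break
        d += 1
    return order

print(diagonal_order(8))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3)]
```

The printed pairs match the steps listed above: the first instruction of TM_1, the second of TM_1, the first of TM_2, and so on.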
So we can enumerate the unsafe executions simply by recording the numbers of those that halt.
Now, when TM_i halts, it is added to the list of unsafe systems. The diagonalization procedure is modified to skip over the halted TM_i. Thus, all systems that halt in state q_f will be enumerated. So we can list the unsafe Turing machine executions.
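The modified procedure can be sketched as follows. This is a hypothetical model, not real Turing machines: each machine is represented only by the step count at which it halts (None meaning it never halts, i.e., a safe system). The function and parameter names are ours, for illustration.

```python
# Hypothetical sketch of dovetailed execution with halted machines skipped.
# halting_steps[i] is the step at which machine i+1 enters q_f, or None if
# it never does (a safe system). Machines beyond the list never halt.

def dovetail(halting_steps, rounds):
    """Run machines 1, 2, 3, ... in diagonal order for `rounds` diagonals.

    Returns the 1-based indices of the machines enumerated as unsafe,
    in the order in which they halted.
    """
    steps_done = {}   # machine index -> steps executed so far
    halted = set()    # machines already enumerated as unsafe
    unsafe = []
    for d in range(1, rounds + 1):        # diagonal d touches machines 1..d
        for m in range(1, d + 1):
            if m in halted:
                continue                   # skip machines that already halted
            steps_done[m] = steps_done.get(m, 0) + 1
            limit = halting_steps[m - 1] if m - 1 < len(halting_steps) else None
            if limit is not None and steps_done[m] >= limit:
                halted.add(m)
                unsafe.append(m)           # machine m entered q_f: unsafe
    return unsafe

# Machine 1 never halts (safe); machine 2 halts after 3 steps; machine 3 after 1.
print(dovetail([None, 3, 1], rounds=6))
# [3, 2]
```

Note that the non-halting machine 1 never blocks progress: machines 2 and 3 are both enumerated even though machine 1 runs forever, which is exactly why the serial approach fails and the diagonal one succeeds.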
Now we map this back into protection systems.
As we can list the unsafe Turing machine executions, we can list the corresponding protection systems in which the right leaks. Hence the set of unsafe protection systems is recursively enumerable.
ECS 235A, Computer and Information Security
Version of December 27, 2021 at 2:07PM