
In computer science, a random-access machine (RAM) is an abstract machine in the general class of register machines. The RAM is very similar to the counter machine but with the added capability of 'indirect addressing' of its registers. Like the counter machine, the RAM has its instructions in the finite-state portion of the machine (the so-called Harvard architecture).
The RAM's equivalent of the universal Turing machine, with its program in the registers as well as its data, is called the random-access stored-program machine or RASP. It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.
Together with the Turing machine and counter-machine models, the RAM and RASP models are used for computational complexity analysis. Van Emde Boas (1990) calls these three plus the pointer machine "sequential machine" models, to distinguish them from "parallel random-access machine" models.
The concept of a random-access machine (RAM) starts with the simplest model of all, the so-called counter machine model. Two additions move it away from the counter machine, however. The first enhances the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer with the addition of one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".
A random-access machine (RAM) is an abstract computational-machine model identical to a multiple-register counter machine with the addition of indirect addressing. At the discretion of an instruction from its finite-state machine's TABLE, the machine derives a "target" register's address either (i) directly from the instruction itself, or (ii) indirectly from the contents (e.g. number, label) of the "pointer" register specified in the instruction.
By definition: A register is a location with both an address (a unique, distinguishable designation/locator equivalent to a natural number) and a content: a single natural number. For precision we will use the quasi-formal symbolism from Boolos–Burgess–Jeffrey (2002) to specify a register, its contents, and an operation on a register:
Definition: A direct instruction is one that specifies, in the instruction itself, the address of the source or destination register whose contents will be the subject of the instruction.
Definition: An indirect instruction is one that specifies a "pointer register", the contents of which is the address of a "target" register. The target register can be either a source or a destination (the various COPY instructions provide examples of this). A register can address itself indirectly.
Definition: The contents of the source register is used by the instruction. The source register's address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction.
Definition: The contents of the pointer register is the address of the "target" register.
Definition: The contents of the pointer register points to the target register; the "target" may be either a source or a destination register.
Definition: The destination register is where the instruction deposits its result. The destination register's address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction. The source and destination registers can be one and the same.
The register machine has, for a memory external to its finite-state machine, an unbounded (cf. footnote: countable and unbounded) collection of discrete and uniquely labelled locations with unbounded capacity, called "registers". These registers hold only natural numbers (zero and the positive integers). Per a list of sequential instructions in the finite-state machine's TABLE, a few (e.g. 2) types of primitive operations operate on the contents of these "registers". Finally, a conditional expression in the form of an IF-THEN-ELSE is available to test the contents of one or two registers and "branch/jump" the finite-state machine out of the default instruction sequence.
Base model 1: The model closest to Minsky's (1961) visualization and to Lambek (1961):
Instruction | Mnemonic | Action on register(s) "r" | Action on finite-state machine's Instruction Register, IR
INCrement | INC ( r ) | [r] + 1 → r | [IR] + 1 → IR
DECrement | DEC ( r ) | [r] − 1 → r | [IR] + 1 → IR
Jump if Zero | JZ ( r, z ) | none | IF [r] = 0 THEN z → IR ELSE [IR] + 1 → IR
Halt | H | none | [IR] → IR
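The semantics of base model 1 can be sketched as a small interpreter. The following is a minimal illustrative sketch, not from the source; the tuple encoding of instructions and the dictionary of registers are assumptions made for the example:

```python
# Minimal sketch of a base-model-1 counter machine.
# The program list plays the role of the finite-state machine's TABLE;
# registers hold only natural numbers, so DEC floors at zero.

def run(program, registers, max_steps=10_000):
    """Execute ('INC', r) / ('DEC', r) / ('JZ', r, z) / ('H',) tuples."""
    ir = 0  # instruction register IR
    for _ in range(max_steps):
        op = program[ir]
        if op[0] == 'INC':
            registers[op[1]] += 1          # [r] + 1 -> r
            ir += 1
        elif op[0] == 'DEC':
            registers[op[1]] = max(0, registers[op[1]] - 1)  # [r] - 1 -> r
            ir += 1
        elif op[0] == 'JZ':
            ir = op[2] if registers[op[1]] == 0 else ir + 1
        elif op[0] == 'H':
            return registers               # halt: [IR] -> IR forever
    raise RuntimeError("step limit exceeded")

# Add [r1] into r0 by repeatedly decrementing r1 and incrementing r0;
# register 2 is a reserved zero register, so JZ on it is an unconditional jump.
add = [('JZ', 1, 4), ('DEC', 1), ('INC', 0), ('JZ', 2, 0), ('H',)]
print(run(add, {0: 3, 1: 2, 2: 0}))  # {0: 5, 1: 0, 2: 0}
```

Note how the unconditional jump back to the top of the loop is built from JZ on a register known to hold 0, a trick the text returns to below.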
Base model 2: The "successor" model (named after the successor function of the Peano axioms):
Instruction | Mnemonic | Action on register(s) "r" | Action on finite-state machine's Instruction Register, IR
CLeaR | CLR ( r ) | 0 → r | [IR] + 1 → IR
INCrement | INC ( r ) | [r] + 1 → r | [IR] + 1 → IR
Jump if Equal | JE ( r1, r2, z ) | none | IF [r1] = [r2] THEN z → IR ELSE [IR] + 1 → IR
Halt | H | none | [IR] → IR
Base model 3: Used by Elgot–Robinson (1964) in their investigation of bounded and unbounded RASPs; the "successor" model with COPY in the place of CLEAR:
Instruction | Mnemonic | Action on register(s) "r" | Action on finite-state machine's Instruction Register, IR
COPY | COPY ( r1, r2 ) | [r1] → r2 | [IR] + 1 → IR
INCrement | INC ( r ) | [r] + 1 → r | [IR] + 1 → IR
Jump if Equal | JE ( r1, r2, z ) | none | IF [r1] = [r2] THEN z → IR ELSE [IR] + 1 → IR
Halt | H | none | [IR] → IR
The three base sets 1, 2, and 3 above are equivalent in the sense that one can create the instructions of one set using the instructions of another set (an interesting exercise; a hint from Minsky (1967): declare a reserved register, e.g. call it "0" (or Z for "zero" or E for "erase"), to contain the number 0). The choice of model will depend on which an author finds easiest to use in a demonstration, a proof, etc.
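As one worked instance of this equivalence, base set 2's CLR (r) can be recreated from base set 1's DEC and JZ alone, using Minsky's hint of a reserved register holding 0. The Python rendering below is an illustrative sketch (the loop comments show the three-instruction program it stands in for; the encoding is an assumption, not from the source):

```python
# Sketch: CLR (r) built from base set 1's DEC and JZ.
# Z is the reserved register assumed to always contain 0 (Minsky's hint),
# so JZ (Z, loop) acts as an unconditional jump back to the loop head.
def clr_via_base1(regs, r, Z='Z'):
    # loop:  JZ (r, done)   -- exit when [r] = 0
    #        DEC (r)        -- [r] - 1 -> r
    #        JZ (Z, loop)   -- [Z] = 0 always: unconditional jump
    # done:  ...
    assert regs[Z] == 0
    while regs[r] != 0:
        regs[r] -= 1
    return regs

regs = clr_via_base1({'Z': 0, 5: 7}, 5)
print(regs[5])  # 0
```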
Moreover, from base sets 1, 2, or 3 we can create any of the primitive recursive functions (cf. Minsky (1967), Boolos–Burgess–Jeffrey (2002)). (How to cast the net wider to capture the total and partial μ-recursive functions will be discussed in the context of indirect addressing.) However, building the primitive recursive functions is difficult because the instruction sets are so ... primitive (tiny). One solution is to expand a particular set with "convenience instructions" from another set:
Again, all of this is for convenience only; none of this increases the model's intrinsic power.
For example: the most expanded set would include each unique instruction from the three sets, plus unconditional jump J (z) i.e.:
Most authors pick one or the other of the conditional jumps; e.g. Shepherdson–Sturgis (1963) use the above set minus JE (to be perfectly accurate they use JNZ, Jump if Not Zero, in place of JZ; yet another possible convenience instruction).
In our daily lives the notion of an "indirect operation" is not unusual.
Indirection specifies a location identified as the pirate chest in "Tom_&_Becky's_cave..." that acts as a pointer to any other location (including itself): its contents (the treasure map) provides the "address" of the target location "under_Thatcher's_front_porch" where the real action is occurring.
In the following one must remember that these models are abstract models with two fundamental differences from anything physically real: unbounded numbers of registers, each with unbounded capacity. The problem appears most dramatically when one tries to use a counter-machine model to build a RASP that is Turing equivalent and thus compute any partial μ-recursive function:
So how do we address a register beyond the bounds of the finite-state machine? One approach would be to modify the program instructions (the ones stored in the registers) so that they contain more than one command. But this too can be exhausted unless an instruction is of (potentially) unbounded size. So why not use just one "über-instruction", one really, really big number, that contains all the program instructions encoded into it? This is how Minsky solves the problem, but the Gödel numbering he uses represents a great inconvenience to the model, and the result is nothing at all like our intuitive notion of a "stored-program computer".
Elgot and Robinson (1964) come to a similar conclusion with respect to a RASP that is "finitely determined". Indeed it can access an unbounded number of registers (e.g. to fetch instructions from them) but only if the RASP allows "self modification" of its program instructions, and has encoded its "data" in a Gödel number (Fig. 2 p. 396).
In the context of a more computer-like model using his RPT (repeat) instruction, Minsky (1967) tantalizes us with a solution to the problem (cf. p. 214, p. 259) but offers no firm resolution. He asserts:
He offers us a bounded RPT that together with CLR (r) and INC (r) can compute any primitive recursive function, and he offers the unbounded RPT quoted above as playing the role of the μ operator; together with CLR (r) and INC (r) it can compute the μ-recursive functions. But he does not discuss "indirection" or the RAM model per se.
From the references in Hartmanis (1971) it appears that Cook (in his lecture notes while at UC Berkeley, 1970) firmed up the notion of indirect addressing. This becomes clearer in the paper of Cook and Reckhow (1973); Cook was Reckhow's Master's thesis advisor. Hartmanis' model, quite similar to Melzak's (1961) model, uses two- and three-register adds and subtracts and two-parameter copies; Cook and Reckhow's model reduces the number of parameters (registers called out in the program instructions) to one call-out by use of an accumulator "AC".
The solution in a nutshell: Design our machine/model with unbounded indirection; provide an unbounded "address" register that can potentially name (call out) any register no matter how many there are. For this to work, in general, the unbounded register requires the ability to be cleared and then incremented (and, possibly, decremented) by a potentially infinite loop. In this sense the solution represents the unbounded μ operator that can, if necessary, hunt ad infinitum along the unbounded string of registers until it finds what it is looking for. The pointer register is exactly like any other register with one exception: under the circumstances called "indirect addressing" it provides its contents, rather than the address operand in the state machine's TABLE, to be the address of the target register (including possibly itself!).
If we eschew the Minsky approach of one monster number in one register, and specify that our machine model will be "like a computer", we have to confront this problem of indirection if we are to compute the recursive functions (also called the μ-recursive functions), both total and partial varieties.
Our simpler counter-machine model can do a "bounded" form of indirection, and thereby compute the subclass of primitive recursive functions, by using a primitive recursive "operator" called "definition by cases" (defined in Kleene (1952) p. 229 and Boolos–Burgess–Jeffrey p. 74). Such "bounded indirection" is a laborious, tedious affair. "Definition by cases" requires the machine to determine/distinguish the contents of the pointer register by attempting, time after time until success, to match this contents against a number/name that the case operator explicitly declares. Thus the definition by cases starts from e.g. the lower-bound address and continues ad nauseam toward the upper-bound address, attempting to make a match:
"Bounded" indirection will not allow us to compute the partial recursive functions  for those we need unbounded indirection aka the ? operator.
To be Turing equivalent the counter machine needs either to use the unfortunate single-register Minsky Gödel number method, or to be augmented with an ability to explore the ends of its register string, ad infinitum if necessary. (A failure to find something "out there" defines what it means for an algorithm to fail to terminate; cf. Kleene (1952) pp. 316ff, Chapter XII Partial Recursive Functions, in particular pp. 323–325.) See more on this in the example below.
For unbounded indirection we require a "hardware" change in our machine model. Once we make this change the model is no longer a counter machine, but rather a random-access machine.
Now when e.g. INC is specified, the finite-state machine's instruction will have to specify where the address of the register of interest will come from. This where can be either (i) the state machine's instruction, which provides an explicit label, or (ii) the pointer register whose contents is the address of interest. Whenever an instruction specifies a register address it will now also need to specify an additional parameter "i/d", for "indirect/direct". In a sense this new "i/d" parameter is a "switch" that flips one way to get the direct address as specified in the instruction, or the other way to get the indirect address from the pointer register (which pointer register, in some models every register can be a pointer register, is specified by the instruction). This "mutually exclusive but exhaustive choice" is yet another example of "definition by cases", and the arithmetic equivalent shown in the example below is derived from the definition in Kleene (1952) p. 229.
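The "i/d" switch can be sketched as a single address-resolution function. This is an illustrative sketch only; the function name and the dictionary representation of registers are assumptions for the example (the sample values echo the r2 → r378,426 example used later in the text):

```python
# Sketch of the "i/d" switch: given an instruction's (mode, r) operand pair,
# return the effective address of the target register.
def effective_address(mode, r, registers):
    if mode == 'd':              # direct: the instruction names the register
        return r
    elif mode == 'i':            # indirect: [r] is the address of the target
        return registers[r]
    raise ValueError("mode must be 'd' or 'i'")

regs = {2: 378_426, 378_426: 17}
print(effective_address('d', 2, regs))  # 2
print(effective_address('i', 2, regs))  # 378426
```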
Probably the most useful of the added instructions is COPY. Indeed, Elgot–Robinson (1964) provide their models P_{0} and P'_{0} with the COPY instructions, and Cook–Reckhow (1973) provide their accumulator-based model with only two indirect instructions: COPY to accumulator indirectly, and COPY from accumulator indirectly.
A plethora of instructions: Because any instruction acting on a single register can be augmented with its indirect "dual" (including conditional and unconditional jumps, cf. the Elgot–Robinson model), the inclusion of indirect instructions will double the number of single-parameter/register instructions (e.g. INC (d, r), INC (i, r)). Worse, every two-parameter/register instruction will have 4 possible varieties, e.g.:
In a similar manner every three-register instruction that involves two source registers r_{s1}, r_{s2} and a destination register r_{d} will result in 8 varieties, for example the addition:
will yield:
If we designate one register to be the "accumulator" (see below) and place strong restrictions on the various instructions allowed, then we can greatly reduce the plethora of direct and indirect operations. However, one must be sure that the resulting reduced instruction set is sufficient, and we must be aware that the reduction will come at the expense of more instructions per "significant" operation.
Historical convention dedicates a register to the accumulator, an "arithmetic organ" that literally accumulates its number during a sequence of arithmetic operations:
However, the accumulator comes at the expense of more instructions per arithmetic "operation", in particular with respect to what are called 'read-modify-write' instructions, such as "Increment indirectly the contents of the register pointed to by register r2". "A" designates the "accumulator" register A:
Label | Instruction | A | r2 | r378,426 | Description
 |  |  | 378,426 | 17 | 
INCi ( r2 ): | CPY ( i, r2, d, A ) | 17 | 378,426 | 17 | Contents of r2 points to r378,426 with contents "17": copy this to A
 | INC ( A ) | 18 | 378,426 | 17 | Increment contents of A
 | CPY ( d, A, i, r2 ) | 18 | 378,426 | 18 | Contents of r2 points to r378,426: copy contents of A into r378,426
If we stick with a specific name for the accumulator, e.g. "A", we can imply the accumulator in the instructions, for example,
However, when we write the CPY instructions without the accumulator called out the instructions are ambiguous or they must have empty parameters:
Historically, these two CPY instructions have received distinctive names; however, no convention exists. Tradition (e.g. Knuth's (1973) imaginary MIX computer) uses the two names LOAD and STORE. Here we are adding the "i/d" parameter:
The typical accumulator-based model will have all its two-variable arithmetic and constant operations (e.g. ADD (A, r), SUB (A, r)) use (i) the accumulator's contents, together with (ii) a specified register's contents. The one-variable operations (e.g. INC (A), DEC (A) and CLR (A)) require only the accumulator. Both instruction types deposit the result (e.g. sum, difference, product, quotient or remainder) in the accumulator.
If we so choose, we can abbreviate the mnemonics, because at least one source register and the destination register are always the accumulator A. Thus we have:
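The accumulator-style LDA/STA pair with the "i/d" parameter, and the three-instruction read-modify-write sequence from the table above, can be sketched as follows. This is an illustrative sketch; the function names and register dictionary are assumptions for the example:

```python
# Sketch of accumulator-based LDA/STA with the "i/d" parameter.
A = 'A'  # the dedicated accumulator register

def lda(mode, r, regs):
    """LDA (d, r): [r] -> A.  LDA (i, r): [[r]] -> A."""
    regs[A] = regs[r] if mode == 'd' else regs[regs[r]]

def sta(mode, r, regs):
    """STA (d, r): [A] -> r.  STA (i, r): [A] -> [r]."""
    if mode == 'd':
        regs[r] = regs[A]
    else:
        regs[regs[r]] = regs[A]

# Read-modify-write "INCi (r2)" from the text's example: three instructions.
regs = {A: 0, 2: 378_426, 378_426: 17}
lda('i', 2, regs)   # CPY (i, r2, d, A): A <- 17
regs[A] += 1        # INC (A):           A <- 18
sta('i', 2, regs)   # CPY (d, A, i, r2): r378,426 <- 18
print(regs[378_426])  # 18
```

Note how the single "significant" operation (indirect increment) costs three accumulator instructions, which is exactly the trade-off the text describes.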
If our model has an unbounded accumulator can we bound all the other registers? Not until we provide for at least one unbounded register from which we derive our indirect addresses.
The minimalist approach is to use the accumulator itself as the indirect address register (Schönhage does this).
Another approach (Schönhage does this too) is to declare a specific register the "indirect address register" and confine indirection relative to this register (Schönhage's RAM0 model uses both A and N registers for indirect as well as direct instructions). Again our new register has no conventional name; perhaps "N" from "iNdex", or "iNdirect", or "address Number".
For maximum flexibility, as we have done for the accumulator A, we will consider N just another register subject to increment, decrement, clear, test, direct copy, etc. Again we can shrink the instruction to a single parameter that provides for direction and indirection, for example:
Why is this such an interesting approach? At least two reasons:
(1) An instruction set with no parameters:
Schönhage does this to produce his RAM0 instruction set. See section below.
(2) Reduce a RAM to a PostTuring machine:
Posing as minimalists, we reduce all the registers excepting the accumulator A and indirection register N, e.g. r = { r0, r1, r2, ... }, to an unbounded string of (very) bounded-capacity pigeon-holes. These will do nothing but hold (very) bounded numbers, e.g. a lone bit with value { 0, 1 }. Likewise we shrink the accumulator to a single bit. We restrict any arithmetic to the registers { A, N }, use indirect operations to pull the contents of registers into the accumulator, and write 0 or 1 from the accumulator to a register:
We push further and eliminate A altogether by the use of two "constant" registers called "ERASE" and "PRINT": [ERASE]=0, [PRINT]=1.
Rename the COPY instructions, call INC (N) = RIGHT and DEC (N) = LEFT, and we have the same instructions as the Post-Turing machine, plus an extra CLRN:
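The reduction above can be sketched directly: the head is the indirection register N, the tape is a string of one-bit registers, and the constant registers hold 0 and 1. This is an illustrative sketch (function names, the finite tape length, and the dictionary layout are assumptions for the example; the real model's tape is unbounded):

```python
# Sketch: Post-Turing moves and marks simulated with indirection register N
# over a left-ended "tape" of one-bit registers; 'E' and 'P' hold 0 and 1.
def make_machine(tape_len=8):
    return {'E': 0, 'P': 1, 'N': 0, **{i: 0 for i in range(tape_len)}}

def right(m):   m['N'] += 1                  # RIGHT = INC (N)
def left(m):    m['N'] = max(0, m['N'] - 1)  # LEFT = DEC (N), floored at the end
def mark(m):    m[m['N']] = m['P']           # PRINT = CPY (d, P, i, N)
def erase(m):   m[m['N']] = m['E']           # ERASE = CPY (d, E, i, N)
def scanned(m): return m[m['N']]             # read via the indirect jump JZ (i, N, z)

m = make_machine()
right(m)           # head moves to square 1
mark(m)            # square 1 <- 1
print(scanned(m))  # 1
erase(m)           # square 1 <- 0
print(scanned(m))  # 0
```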
In the section above we informally showed that a RAM with an unbounded indirection capability produces a Post-Turing machine. The Post-Turing machine is Turing equivalent, so we have shown that the RAM with indirection is Turing equivalent.
We give here a slightly more formal demonstration. Begin by designing our model with three reserved registers "E", "P", and "N", plus an unbounded set of registers 1, 2, ..., n to the right. The registers 1, 2, ..., n will be considered "the squares of the tape". Register "N" points to "the scanned square" that "the head" is currently observing. The "head" can be thought of as being in the conditional jump; observe that it uses indirect addressing (cf. Elgot–Robinson p. 398). As we decrement or increment "N", the (apparent) head will "move left" or "right" along the squares. We will move the contents of "E"=0 or "P"=1 to the "scanned square" as pointed to by N, using the indirect CPY.
The fact that our tape is left-ended presents us with a minor problem: whenever LEFT occurs our instructions will have to test to determine whether or not the contents of "N" is zero; if so we should leave its count at "0" (this is our choice as designers; for example we might have the machine/model "trigger an event" of our choosing).
The following table both defines the Post-Turing instructions in terms of their RAM-equivalent instructions and gives an example of their functioning. The (apparent) location of the head along the tape of registers r0–r5 ... is shown shaded:
Mnemonic | label: | RAM instruction | E | P | N | r0 | r1 | r2 | r3 | r4 | r5 | Action on registers | Action on finite-state machine's Instruction Register, IR
 | start: |  | 0 | 1 | 3 |  |  |  | 1 | 0 |  |  | 
R | right: | INC ( N ) | 0 | 1 | 4 |  |  |  | 1 | 0 |  | [N] + 1 → N | [IR] + 1 → IR
P | print: | CPY ( d, P, i, N ) | 0 | 1 | 4 |  |  |  | 1 | 1 |  | [P]=1 → [N]=r4 | [IR] + 1 → IR
E | erase: | CPY ( d, E, i, N ) | 0 | 1 | 4 |  |  |  | 1 | 0 |  | [E]=0 → [N]=r4 | [IR] + 1 → IR
L | left: | JZ ( i, N, end ) | 0 | 1 | 4 |  |  |  | 1 | 0 |  | none | IF [[N]] = [r4] = 0 THEN "end" → IR ELSE [IR] + 1 → IR
 |  | DEC ( N ) | 0 | 1 | 3 |  |  |  | 1 | 0 |  | [N] − 1 → N | 
J0 ( halt ) | jump_if_blank: | JZ ( i, N, end ) | 0 | 1 | 3 |  |  |  | 1 | 0 |  | none | IF [[N]] = [r3] = 0 THEN "end" → IR ELSE [IR] + 1 → IR
J1 ( halt ) | jump_if_mark: | JZ ( i, N, halt ) | 0 | 1 | 3 |  |  |  | 1 | 0 |  | none | IF [[N]] = [r3] = 0 THEN "halt" → IR ELSE [IR] + 1 → IR
end | . . . etc. |  | 0 | 1 | 3 |  |  |  | 1 | 0 |  |  | 
 | halt: | H | 0 | 1 | 3 |  |  |  | 1 | 0 |  | none | [IR] + 1 → IR
Throughout this demonstration we have to keep in mind that the instructions in the finite-state machine's TABLE are bounded, i.e. finite:
We will build the indirect CPY ( i, q, d, φ ) with the CASE operator. The address of the target register will be specified by the contents of register "q"; once the CASE operator has determined what this number is, CPY will directly deposit the contents of the register with that number into register "φ". We will need an additional register that we will call "y"; it serves as an up-counter.
The CASE "operator" is described in Kleene (1952) (p. 229) and in BoolosBurgessJeffrey (2002) (p. 74); the latter authors emphasize its utility. The following definition is per Kleene but modified to reflect the familiar "IFTHENELSE" construction.
The CASE operator "returns" a natural number into ? depending on which "case" is satisfied, starting with "case_0" and going successively through "case_last"; if no case is satisfied then the number called "default" (aka "woops") is returned into ? (here x designates some selection of parameters, e.g. register q and the string r0, ... rlast )):
Definition by cases φ (x, y):
Kleene requires that the "predicates" Q_{n} that do the testing are all mutually exclusive; "predicates" are functions that produce only { true, false } for output. Boolos–Burgess–Jeffrey add the requirement that the cases are "exhaustive".
We begin with a number in register q that represents the address of the target register. But what is this number? The "predicates" will test it to find out, one trial after another: JE (q, y, z) followed by INC (y). Once the number is identified explicitly, the CASE operator directly/explicitly copies the contents of this register to φ:
Case_0 ( the base step of the recursion on y) looks like this:
Case_n (the induction step) looks like this; remember, each instance of "n", "n+1", ..., "last" must be an explicit natural number:
Case_last stops the induction and bounds the CASE operator (and thereby bounds the "indirect copy" operator):
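The whole bounded scheme, base step, induction steps, and the bounding last case, can be sketched as one loop (in the real TABLE each value of the up-counter y is a separately written-out case). This is an illustrative sketch; the function name, the bound LAST, and the register dictionary are assumptions for the example:

```python
# Sketch of "definition by cases" driving a bounded indirect copy:
# test [q] against 0, 1, ..., LAST in turn (JE (q, y, z) then INC (y));
# on a match, copy that register's contents directly. LAST bounds the search.
LAST = 3

def indirect_copy_bounded(q, regs, default=0):
    y = 0                      # the up-counter register "y"
    while y <= LAST:           # unrolled into explicit cases in the real TABLE
        if regs[q] == y:       # JE (q, y, case_y): the predicate for case y
            return regs[y]     # direct copy of register y's contents
        y += 1                 # INC (y): move on to the next case
    return default             # no case matched: the "woops" value

regs = {'q': 2, 0: 9, 1: 8, 2: 7, 3: 6}
print(indirect_copy_bounded('q', regs))  # 7, since [q] = 2 and [2] = 7
```

If [q] exceeds LAST the search simply falls through to the default, which is precisely why bounded indirection cannot reach arbitrarily distant registers.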
If the CASE could continue ad infinitum it would be the μ operator. But it can't; its finite-state machine's "state register" has reached its maximum count (e.g. 65535 = 11111111,11111111_{2}), or its table has run out of instructions; it is a finite machine, after all.
The commonly encountered Cook and Reckhow model is a bit like the ternary-register Melzak model (written with Knuth mnemonics; the original instructions had no mnemonics excepting TRA, Read, Print).
LOAD ( C, r_{d} ) ; C → r_{d}, C is any integer. Example: LOAD ( 0, 5 ) will clear register 5.
ADD ( r_{s1}, r_{s2}, r_{d} ) ; [r_{s1}] + [r_{s2}] → r_{d}, the registers can be the same or different; ADD ( A, A, A ) will double the contents of register A.
SUB ( r_{s1}, r_{s2}, r_{d} ) ; [r_{s1}] − [r_{s2}] → r_{d}, the registers can be the same or different: SUB ( 3, 3, 3 ) will clear register 3.
COPY ( i, r_{p}, d, r_{d} ) ; [[r_{p}]] → r_{d}, indirectly copy the contents of the source register pointed to by pointer-register r_{p} into the destination register.
COPY ( d, r_{s}, i, r_{p} ) ; [r_{s}] → [r_{p}], copy the contents of source register r_{s} into the destination register pointed to by pointer-register r_{p}.
JNZ ( r, I_{z} ) ; conditional jump if [r] is positive, i.e. IF [r] > 0 THEN jump to instruction z ELSE continue in sequence (Cook and Reckhow call this: "TRAnsfer control to line m if Xj > 0").
READ ( r_{d} ) ; copy "the input" into destination register r_{d}.
PRINT ( r_{s} ) ; copy the contents of source register r_{s} to "the output."
Schönhage (1980) describes a very primitive, atomized model chosen for his proof of the equivalence of his SMM pointer machine model:
RAM1 model: Schönhage demonstrates how his construction can be used to form the more common, usable form of a "successor"-like RAM (using this article's mnemonics):
LDA k ; k → A, k is a constant, an explicit number such as "47".
LDA ( d, r ) ; [r] → A ; directly load A.
LDA ( i, r ) ; [[r]] → A ; indirectly load A.
STA ( d, r ) ; [A] → r ; directly store A.
STA ( i, r ) ; [A] → [r] ; indirectly store A.
JEA ( r, z ) ; IF [A] = [r] THEN I_{z} ELSE continue.
INCA ; [A] + 1 → A.
RAM0 model: Schönhage's RAM0 machine has 6 instructions indicated by a single letter (the 6th, "C xxx", seems to involve 'skip over next parameter'). Schönhage designated the accumulator with "z" and "N" with "n", etc. Rather than Schönhage's mnemonics we will use the mnemonics developed above.
(Z), CLRA: 0 → A
(A), INCA: [A] + 1 → A
(N), CPYAN: [A] → N
(L), LDAA: [[A]] → A ; contents of A points to a register address; put that register's contents into A
(S), STAN: [A] → [N] ; contents of N points to a register address; put contents of A into the register pointed to by N
(C), JAZ ( z ): [A] = 0 then go to I_{z} ; ambiguous in his treatment
Indirection comes (i) from CPYAN (copy/transfer contents of A to N) working with store_A_via_N STAN, and (ii) from the peculiar indirection instruction LDAA ( [[A]] → A ).
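The six RAM0 instructions are small enough to step through in a few lines. This sketch uses the article's renamed mnemonics; the machine-state encoding and step function are illustrative assumptions, not Schönhage's own formulation:

```python
# Sketch of the six RAM0 instructions over accumulator A, index register N,
# and a register store r (mnemonics per the article's renaming).
def step(op, m, z=None):
    a, n, r = m['A'], m['N'], m['r']
    if op == 'CLRA':    m['A'] = 0              # (Z): 0 -> A
    elif op == 'INCA':  m['A'] = a + 1          # (A): [A] + 1 -> A
    elif op == 'CPYAN': m['N'] = a              # (N): [A] -> N
    elif op == 'LDAA':  m['A'] = r.get(a, 0)    # (L): [[A]] -> A (indirect load)
    elif op == 'STAN':  r[n] = a                # (S): [A] -> [N] (store via N)
    elif op == 'JAZ':   return z if a == 0 else None  # (C): jump if [A] = 0
    return None

m = {'A': 0, 'N': 0, 'r': {}}
step('INCA', m); step('INCA', m)  # A = 2
step('CPYAN', m)                  # N = 2: A supplies the indirect address
step('STAN', m)                   # register 2 <- 2
print(m['r'][2])  # 2
```

The CPYAN/STAN pair is the whole indirection mechanism: A computes an address, N holds it, and STAN writes through it.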
The definitional fact that any sort of counter machine without an unbounded "address" register must specify a register "r" by name indicates that the model requires "r" to be finite, although it is "unbounded" in the sense that the model implies no upper limit to the number of registers necessary to do its job(s). For example, we do not require r < 83,617,563,821,029,283,746 nor r < 2^1,000,001, etc.
We can escape this restriction by providing an unbounded register whose contents supply the address of the register that is to be addressed indirectly.
With a few exceptions, these references are the same as those at Register machine.