
SEQUENCE CONTROL WITH EXPRESSIONS

Sequence control refers to user actions and computer logic that initiate, interrupt, or terminate transactions; it governs the transition from one transaction to the next. General design objectives include consistency of control actions, a minimized need for control actions, a minimized memory load on the user, and flexibility of sequence control to adapt to different user needs. Methods of sequence control require explicit attention in interface design, and many published guidelines deal with this topic. The importance of good design for controlling user interaction with a computer system has been emphasized by Brown, Brown, Burkleo, Mangelsdorf, Olsen and Perkins (1983, page 4-1):

One of the critical determinants of user satisfaction and acceptance of a computer system is the extent to which the user feels in control of an interactive session. If users cannot control the direction and pace of the interaction sequence, they are likely to feel frustrated, intimidated, or threatened by the computer system. Their productivity may suffer, or they may avoid using the system at all.

Complete user control of the interaction sequence and its pacing is not always possible, of course, particularly in applications where computer aids are used for monitoring and process control. The actions of an air traffic controller, for example, are necessarily paced to some degree by the job to be done. As a general principle, however, it is the user who should decide what needs doing and when to do it.

Programming Control Structures
You can write any program by using a combination of three control structures:
(1) sequence
(2) selection
(3) repetition (a.k.a. iteration or looping)

These three structures are the building blocks of all programs; they form the foundation of structured programming.

THE SEQUENCE CONTROL STRUCTURE
The sequence control structure is the simplest of the three structures; it is a program segment where statements are executed in sequence, one after the other.

A coding segment that would be represented by the sequence structure is one in which there is no decision-making, looping, or branching; it is simply a series of statements executed one after the other.
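As an illustration, here is a minimal C++ sketch (the variable names are invented for this example); every statement executes exactly once, in order:

#include <iostream>

int main() {
    double price = 19.99;                     // executed first
    double tax = price * 0.08;                // executed second
    double total = price + tax;               // executed third
    std::cout << "Total: " << total << '\n';  // executed last
    return 0;
}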

CONDITIONAL STATEMENTS
In computer science, conditional statements, conditional expressions and conditional constructs are features of a programming language which perform different computations or actions depending on whether a programmer-specified boolean condition evaluates to true or false. Apart from the case of branch predication, this is always achieved by selectively altering the control flow based on some condition. In imperative programming languages, the term "conditional statement" is usually used, whereas in functional programming, the terms "conditional expression" or "conditional construct" are preferred, because these terms all have distinct meanings. Although dynamic dispatch is not usually classified as a conditional construct, it is another way to select between alternatives at runtime.
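To make the distinction concrete, here is a small C++ sketch (the values are invented for illustration): the if statement is a conditional statement that alters control flow, while the ?: operator is a conditional expression that yields a value.

#include <iostream>

int main() {
    int n = -5;
    // Conditional statement: selects which statements execute.
    if (n < 0) {
        std::cout << "negative\n";
    } else {
        std::cout << "non-negative\n";
    }
    // Conditional expression: selects which value is produced.
    int magnitude = (n < 0) ? -n : n;
    std::cout << "magnitude: " << magnitude << '\n';
    return 0;
}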

Classification:

- The if statement
- Using the exit status of a command
- Comparing and testing input and files
- if/then/else constructs
- if/then/elif/else constructs
- Using and testing the positional parameters
- Nested if statements
- Boolean expressions
- Using case statements

Introduction to if
At times you need to specify different courses of action to be taken in a shell script, depending on the success or failure of a command. The if construction allows you to specify such conditions. The most compact syntax of the if command is:

if TEST-COMMANDS; then CONSEQUENT-COMMANDS; fi

The TEST-COMMANDS list is executed, and if its return status is zero, the CONSEQUENT-COMMANDS list is executed. The return status is the exit status of the last command executed, or zero if no condition tested true. The test-command often involves numerical or string comparison tests, but it can also be any command that returns a status of zero when it succeeds and some other status when it fails.

CASE AND SWITCH STATEMENTS


Switch statements (in some languages, case statements or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ('else', 'otherwise') to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching, as in shell case statements, where the pattern '*)' implements the default case by matching any string.
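As a sketch in C++ (the grade values are invented for illustration), a switch compares a value against constant case labels, with default playing the 'otherwise' role; a compiler may turn a dense set of such labels into a lookup table:

#include <iostream>

int main() {
    char grade = 'B';
    switch (grade) {
        case 'A': std::cout << "excellent\n"; break;
        case 'B': std::cout << "good\n";      break;
        case 'C': std::cout << "adequate\n";  break;
        default:  std::cout << "no match\n";  break;  // taken if no constant matches
    }
    return 0;
}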

LOOP
In computer programming, a loop is a sequence of instructions that is continually repeated until a certain condition is reached. Typically, a certain process is done, such as getting an item of data and changing it, and then some condition is checked, such as whether a counter has reached a prescribed number. If it hasn't, the next instruction in the sequence is an instruction to return to the first instruction in the sequence and repeat the sequence. If the condition has been reached, the next instruction "falls through" to the next sequential instruction or branches outside the loop. A loop is a fundamental programming idea that is commonly used in writing programs. A loop is a programming language construction that allows the programmer to instruct the computer to perform a certain instruction, or set of instructions, over and over again.

There are many different types, but some are omitted from programming languages because they are variations on an existing loop type, and the same behaviour can be achieved without offering a specific loop type to cover it. However, we cover the three types which are common to most programming languages:

- Condition Tested Loops
- Counted Loops
- Endless Loops

Of these, only the condition tested loop is vital; the other two just give a more convenient way of achieving the same effect.

Condition Tested Loops
A condition tested loop is one which repeats a set of instructions until a certain condition is reached. The test can be performed at the start of the loop (before any of the instructions are executed), during the loop, or at the end of the loop. Usually, the condition will be testing the result of executing the statements that are inside the loop. As such, loops are contained within a looping start statement (BEGIN, REPEAT, LOOP etc.) and a closing statement, which may be the condition (UNTIL, WHILE etc.) or not (END LOOP, for example).
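In C++ terms (a minimal sketch with invented values), while tests the condition at the start of the loop, so its body may run zero times, whereas do ... while tests at the end, so its body always runs at least once:

#include <iostream>

int main() {
    int n = 37;
    // Test at the start: the body is skipped entirely for odd n.
    while (n % 2 == 0) {
        n /= 2;
    }
    // Test at the end: the body runs at least once.
    do {
        std::cout << n << '\n';
        n -= 10;
    } while (n > 0);
    return 0;
}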

Counted Loops
A counted loop is one which allows the programmer to instruct the computer to perform a set of instructions x times, where x is usually an integer value, but some programming languages offer other data types. One could argue that the counted loop is just a condition tested loop which updates a counter and exits once a given value is reached. For this reason, some of the lesser programming languages dispense with the counted loop altogether. The major languages, however, retain the counted loop, which has a starting statement (usually the keyword FOR), which contains the condition (as in FOR I = 1 TO 10), and a closing statement (NEXT, END FOR etc.) which sends the execution back up to the top of the loop again.
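The C++ counterpart of FOR I = 1 TO 10 is the for statement. As the text argues, it is just a condition tested loop that updates a counter, which the hand-written equivalent below makes explicit (a minimal sketch):

#include <iostream>

int main() {
    // Counted loop: the header bundles initialization, test, and update.
    for (int i = 1; i <= 10; ++i) {
        std::cout << i << ' ';
    }
    std::cout << '\n';
    // The same loop spelled out as a condition tested loop.
    int i = 1;
    while (i <= 10) {
        std::cout << i << ' ';
        ++i;
    }
    std::cout << '\n';
    return 0;
}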

Endless Loops
An endless loop goes round and round until one of three things happens:

- The computer is turned off (or the application stopped, forcefully)
- The computer encounters an EXIT (or similar) statement
- An error forces the application to 'crash'

Perhaps the most useless endless loop is the BASIC:

REPEAT UNTIL FALSE

This single line does absolutely nothing. It also has an equivalent in all other programming languages, and can be constructed in even the most rudimentary of them.

However, some endless loops serve a purpose, in message loops, for example, where it is necessary to continually monitor for incoming messages from the operating system.
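A hedged C++ sketch of such a message loop (poll_message and the "QUIT" message are hypothetical stand-ins for an operating-system API):

#include <iostream>
#include <optional>
#include <string>

// Hypothetical stand-in for an OS message queue; after a few
// messages it delivers "QUIT" so the example terminates.
std::optional<std::string> poll_message() {
    static int calls = 0;
    if (++calls > 3) return std::string("QUIT");
    return "message " + std::to_string(calls);
}

int main() {
    for (;;) {  // deliberately endless: monitor for incoming messages
        auto msg = poll_message();
        if (msg && *msg == "QUIT") break;  // the EXIT statement mentioned above
        if (msg) std::cout << "handling: " << *msg << '\n';
    }
    return 0;
}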

EXCEPTION HANDLING

Exception handling is a programming language construct or computer hardware mechanism designed to handle the occurrence of exceptional conditions: conditions requiring special processing, often changing the normal flow of program execution. In general, an exception is handled (resolved) by saving the current state of execution in a predefined place and switching the execution to a specific subroutine known as an exception handler. If exceptions are continuable, the handler may later resume the execution at the original location using the saved information. For example, a floating point divide by zero exception will typically, by default, allow the program to be resumed, while an out-of-memory condition might not be resolvable transparently. Alternative approaches to exception handling in software are error checking, which maintains normal program flow with later explicit checks for contingencies, using returned values or some auxiliary global variable such as C's errno; or input validation.

Exception handling in software


From the point of view of the author of a routine, raising an exception is a useful way to signal that a routine could not execute normally - for example, when an input argument is invalid (e.g. value is outside of the domain of a function) or when a resource it relies on is unavailable (like a missing file, a hard disk error, or out-of-memory errors). In systems without exceptions, routines would need to return some special error code. However, this is sometimes complicated by the semipredicate problem, in which users of the routine need to write extra code to distinguish normal return values from erroneous ones. One mechanism for raising an exception is known as a throw. The exception is said to be thrown. Execution is transferred to a "catch".
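A minimal C++ sketch of this throw/catch mechanism (safe_sqrt and its domain check are invented for illustration); note that no special error code is needed, which sidesteps the semipredicate problem:

#include <cmath>
#include <iostream>
#include <stdexcept>

// Raising an exception signals that the routine could not execute normally.
double safe_sqrt(double x) {
    if (x < 0) throw std::domain_error("argument outside function domain");
    return std::sqrt(x);
}

int main() {
    try {
        std::cout << safe_sqrt(2.0) << '\n';
        std::cout << safe_sqrt(-1.0) << '\n';  // thrown here, never printed
    } catch (const std::domain_error& e) {     // execution transfers to this catch
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}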

Contemporary applications face many design challenges when considering exception handling strategies. Particularly in modern enterprise level applications, exceptions must often cross process boundaries and machine boundaries. Part of designing a solid exception handling strategy is recognizing when a process has failed to the point where it cannot be economically handled by the software portion of the process.

Exception safety
A piece of code is said to be exception-safe if run-time failures within the code will not produce ill effects, such as memory leaks, garbled stored data, or invalid output. Exception-safe code must satisfy invariants placed on the code, even if exceptions occur. There are several levels of exception safety:

- Failure transparency, also known as the no-throw guarantee: operations are guaranteed to succeed and satisfy all requirements even in the presence of exceptional situations. If an exception occurs, it will not be thrown further up. (Best level of exception safety.)
- Commit or rollback semantics, also known as strong exception safety or the no-change guarantee: operations can fail, but failed operations are guaranteed to have no side effects, so all data retain their original values.
- Basic exception safety: partial execution of failed operations can cause side effects, but invariants on the state are preserved. Any stored data will contain valid values, even if they differ from their values before the exception.
- Minimal exception safety, also known as the no-leak guarantee: partial execution of failed operations may store invalid data, but will not cause a crash, and no resources get leaked.
- No exception safety: no guarantees are made. (Worst level of exception safety.)

For instance, consider a smart vector type, such as C++'s std::vector or Java's ArrayList. When an item x is added to a vector v, the vector must actually add x to the internal list of objects and also update a count field that says how many objects are in v. It may also need to allocate new memory if the existing capacity isn't large enough. This memory allocation may fail and throw an exception. Because of this, a vector that provides failure transparency would be very difficult or impossible to write. However, the vector may be able to offer the strong exception guarantee fairly easily; in this case, either the insertion of x into v will succeed, or v will remain unchanged. If the vector provides only the basic exception safety guarantee, if the insertion fails, v may or may not contain x, but at least it will be in a consistent state. However, if the vector makes only the minimal guarantee, it's possible that the vector may be invalid. For instance, perhaps the size field of v was incremented, but x wasn't actually inserted, making the state inconsistent. Of course, with no guarantee, the program may crash; perhaps the vector needed to expand, but couldn't allocate the memory and blindly ploughs ahead as if the allocation succeeded, touching memory at an invalid address.

Usually at least basic exception safety is required. Failure transparency is difficult to implement, and is usually not possible in libraries where complete knowledge of the application is not available.
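One common way to obtain commit-or-rollback semantics is the copy-and-swap idiom: do all the work that might throw on a copy, then publish the result with an operation that cannot throw. A hedged C++ sketch (the Roster type is invented; std::vector's own push_back already offers a strong guarantee for most element types, so this is purely illustrative):

#include <string>
#include <vector>

class Roster {
    std::vector<std::string> names_;
public:
    // Strong guarantee: either add succeeds, or *this is unchanged.
    void add(const std::string& name) {
        std::vector<std::string> copy = names_;  // may throw (allocation)
        copy.push_back(name);                    // may throw (allocation)
        names_.swap(copy);                       // never throws: the commit point
    }
};

int main() {
    Roster r;
    r.add("Ada");
    r.add("Wilkes");
    return 0;
}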

Verification of exception handling


The point of exception handling routines is to ensure that the code can handle error conditions. In order to establish that exception handling routines are sufficiently robust, it is necessary to present the code with a wide spectrum of invalid or unexpected inputs, such as can be created via software fault injection and mutation testing (which is also sometimes referred to as fuzz testing). One of the most difficult types of software for which to write exception handling routines is protocol software, since a robust protocol implementation must be prepared to receive input that does not comply with the relevant specification(s). In order to ensure that meaningful regression analysis can be conducted throughout a software development lifecycle process, any exception handling verification should be highly automated, and the test cases must be generated in a scientific, repeatable fashion. Several commercially available systems exist that perform such testing. In runtime engine environments such as Java or .NET, there exist tools that attach to the runtime engine and every time that an exception of interest occurs, they record debugging information that existed in memory at the time the exception was thrown (call stack and heap values). These tools are called automated exception handling or error interception tools and provide 'root-cause' information for exceptions.

Exception support in programming languages


Actionscript, Ada, BlitzMax, C++, C#, D, ECMAScript, Eiffel, Java, ML, Object Pascal (e.g. Delphi, Free Pascal, and the like), Objective-C, Ocaml, PHP (as of version 5), PL/I, Prolog, Python, REALbasic, Ruby, Visual Prolog and most .NET languages have built-in support for exceptions and exception handling. Exception handling is commonly not resumable in those languages, and the event of an exception (more precisely, an exception handled by the language) searches back through the stack of function calls until an exception handler is found, with some languages calling for unwinding the stack as the search progresses. That is, if function f contains a handler H for exception E, calls function g, which in turn calls function h, and an exception occurs in h, then functions h and g may be terminated, and H in f will handle E. An exception-handling language for which this is not true is Common Lisp with its Condition System. Common Lisp calls the exception handler and does not unwind the stack. This allows the computation to be continued at exactly the place where the error occurred (for example, when a previously missing file is now available). Mythryl's stackless implementation supports constant-time exception handling without stack unwinding.

Excluding minor syntactic differences, there are only a couple of exception handling styles in use. In the most popular style, an exception is initiated by a special statement (throw, or raise) with an exception object (e.g. with Java or Object Pascal) or a value of a special extendable enumerated type (e.g. with Ada). The scope for exception handlers starts with a marker clause (try, or the language's block starter such as begin) and ends in the start of the first handler clause (catch, except, rescue). Several handler clauses can follow, and each can specify which exception types it handles and what name it uses for the exception object. A few languages also permit a clause (else) that is used in case no exception occurred before the end of the handler's scope was reached. More common is a related clause (finally, or ensure), that is executed whether an exception occurred or not, typically to release resources acquired within the body of the exception-handling block. Notably, C++ does not need and does not provide this construct; the Resource-Acquisition-Is-Initialization technique should be used to free such resources instead.
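A hedged C++ sketch of that RAII technique (the file name is invented): the destructor releases the resource during stack unwinding, doing the job a finally or ensure clause does in other languages:

#include <cstdio>
#include <iostream>
#include <stdexcept>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("missing file");  // acquisition failed
    }
    ~File() { std::fclose(f_); }  // runs even when an exception propagates
    std::FILE* get() { return f_; }
};

void process() {
    File f("data.txt");                // resource acquired
    throw std::runtime_error("oops");  // f is still closed during unwinding
}

int main() {
    try {
        process();
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}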

Exception handling implementation


The implementation of exception handling in programming languages typically involves a fair amount of support from both a code generator and the runtime system accompanying a compiler. (It was the addition of exception handling to C++ that ended the useful lifetime of the original C++ compiler, Cfront.) Two schemes are most common.

The first, dynamic registration, generates code that continually updates structures about the program state in terms of exception handling. Typically, this adds a new element to the stack frame layout that knows what handlers are available for the function or method associated with that frame; if an exception is thrown, a pointer in the layout directs the runtime to the appropriate handler code. This approach is compact in terms of space, but adds execution overhead on frame entry and exit. It was commonly used in many Ada implementations, for example, where complex generation and runtime support was already needed for many other language features. Dynamic registration, being fairly straightforward to define, is amenable to proof of correctness.

The second scheme, and the one implemented in many production-quality C++ compilers, is a table-driven approach. This creates static tables at compile and link time that relate ranges of the program counter to the program state with respect to exception handling. Then, if an exception is thrown, the runtime system looks up the current instruction location in the tables and determines what handlers are in play and what needs to be done. This approach minimizes execution overhead for the case where an exception is not thrown, albeit at the cost of some space, although said space can be allocated into read-only, special-purpose data sections that are not loaded or relocated until and unless an exception is thrown. This second approach is also superior in terms of achieving thread safety.

Other definitional and implementation schemes have been proposed as well. For languages that support metaprogramming, approaches that involve no overhead at all have been advanced.
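The flavour of dynamic registration can be imitated in portable code with setjmp/longjmp: each protected region registers a jump buffer on entry (the per-frame cost the text describes), and a raise transfers control to the most recently registered handler. This toy C++ sketch is only an illustration of the scheme, not how real compilers implement it; genuine C++ exceptions must not be mixed with longjmp, which bypasses destructors:

#include <csetjmp>
#include <cstdio>

static std::jmp_buf* current_handler = nullptr;  // top of the handler chain

void toy_throw() {
    if (current_handler) std::longjmp(*current_handler, 1);  // jump to handler
}

void risky() { toy_throw(); }

int main() {
    std::jmp_buf buf;
    std::jmp_buf* previous = current_handler;
    if (setjmp(buf) == 0) {       // register on entry: the runtime overhead
        current_handler = &buf;
        risky();                  // 'raises' our toy exception
    } else {
        std::puts("toy exception caught");
    }
    current_handler = previous;   // deregister on exit
    return 0;
}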

SUBPROGRAMS
Subprograms are complex structures in programming languages, and from this follows a lengthy list of issues in their design. One obvious issue is the choice of parameter-passing method or methods that will be used. The wide variety of methods that have been used in various languages is a reflection of the diversity of opinion on the subject. A related issue is whether the types of the parameters of subprograms, which are themselves passed as parameters, are checked.

The nature of the local environment of a subprogram dictates to some degree the nature of the subprogram. The most important question is whether local variables are statically or dynamically allocated. As mentioned earlier, some languages allow subprogram names to be passed as parameters. One design issue is simply whether this is to be allowed in a language. If it is, that raises the question of what the referencing environment of a subprogram that has been passed as a parameter should be. A language that is meant to be useful for constructing significant software systems must allow the compilation of parts (as opposed to being required to compile only complete programs). When some facility for this kind of compilation is provided, the next design issue is how flexible and reliable the mechanism should be. Two distinct approaches have been used: separate and independent compilation. Finally, there are the questions of whether subprograms can be overloaded or generic. An overloaded subprogram is one that has the same name as another subprogram in the same referencing environment. A generic subprogram is one whose computation can be done on data of different types with different calls.

In computer science, a subroutine (also known as a procedure, function, routine, method, or subprogram) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code. As the name "subprogram" suggests, a subroutine behaves in much the same way as a computer program that is used as one step in a larger program or another subprogram. A subroutine is often coded so that it can be started ("called") several times and/or from several places during a single execution of the program, including from other subroutines, and then branch back (return) to the next instruction after the "call" once the subroutine's task is done.

Subroutines are a powerful programming tool, and the syntax of many programming languages includes support for writing and using them. Judicious use of subroutines (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability. Subroutines, often collected into libraries, are an important mechanism for sharing and trading software. The discipline of object-oriented programming is based on objects and methods (which are subroutines attached to these objects or object classes). In the compilation technique called threaded code, the executable program is basically a sequence of subroutine calls. Maurice Wilkes, David Wheeler, and Stanley Gill are credited with the invention of this concept, which they referred to as the closed subroutine.

Main concepts
The content of a subroutine is its body, the piece of program code that is executed when the subroutine is called or invoked. A subroutine may be written so that it expects to obtain one or more data values from the calling program (its parameters or arguments). It may also return a computed value to its caller (its return value), or provide various result values or out(put) parameters. Indeed, a common use of subroutines is to implement mathematical functions, in which the purpose of the subroutine is purely to compute one or more results whose values are entirely determined by the parameters passed to the subroutine. (Examples might include computing the logarithm of a number or the determinant of a matrix.)

However, a subroutine call may also have side effects, such as modifying data structures in the computer's memory, reading from or writing to a peripheral device, creating a file, halting the program or the machine, or even delaying the program's execution for a specified time. A subprogram with side effects may return different results each time it is called, even if it is called with the same arguments. An example is a random number function, available in many languages, that returns a different random-looking number each time it is called. The widespread use of subroutines with side effects is a characteristic of imperative programming languages.

A subroutine can be coded so that it may call itself recursively, at one or more places, in order to perform its task. This technique allows direct implementation of functions defined by mathematical induction and recursive divide and conquer algorithms. A subroutine whose purpose is to compute a single boolean-valued function (that is, to answer a yes/no question) is called a predicate. In logic programming languages, often all subroutines are called "predicates", since they primarily determine success or failure. For example, any type of function is a subroutine, but not main().
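A brief C++ sketch of these concepts (all names invented): a pure function whose result is determined entirely by its parameter, a subroutine with a side effect, a recursive subroutine, and a predicate:

#include <iostream>

// Pure function: result depends only on the parameter.
int square(int n) { return n * n; }

// Side effect: modifies state outside the routine's own body.
int counter = 0;
void tick() { ++counter; }

// Recursion: the routine calls itself to perform its task.
long factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

// Predicate: answers a yes/no question.
bool is_even(int n) { return n % 2 == 0; }

int main() {
    tick();
    tick();
    std::cout << square(7) << ' ' << factorial(5) << ' '
              << std::boolalpha << is_even(counter) << '\n';  // 49 120 true
    return 0;
}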

Advantages
The advantages of breaking a program into subroutines include:

- decomposition of a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
- reducing the duplication of code within a program
- enabling the reuse of code across multiple programs
- dividing a large programming task among various programmers, or various stages of a project
- hiding implementation details from users of the subroutine
- improving traceability: most languages offer ways to obtain the call trace, which includes the names of the involved subroutines and perhaps even more information such as file names and line numbers; without decomposing the code into subroutines, debugging would be severely impaired

Disadvantages

- The invocation of a subroutine (rather than using in-line code) imposes some computational overhead in the call mechanism itself.
- The subroutine typically requires standard housekeeping code both at entry to, and exit from, the function (the function prologue and epilogue, usually saving general purpose registers and the return address as a minimum).

The General Semantics of Calls and Returns

The subprogram call and return operations of a language are together called its subprogram linkage. A subprogram call in a typical language has numerous actions associated with it. The call must include the mechanism for whatever parameter-passing method is used. If local variables are not static, the call must cause storage to be allocated for the locals declared in the called subprogram and bind those variables to that storage. It must save the execution status of the calling program unit. It must arrange to transfer control to the code of the subprogram and ensure that control returns to the proper place when the subprogram execution is completed. Finally, if the language allows nested subprograms, the call must cause some mechanism to be created to provide access to non-local variables that are visible to the called subprogram.

Implementing Simple Subprograms

Simple means that subprograms cannot be nested and all local variables are static. The semantics of a call to a simple subprogram requires the following actions:

(1) Save the execution status of the caller.
(2) Carry out the parameter-passing process.
(3) Pass the return address to the callee.
(4) Transfer control to the callee.

The semantics of a return from a simple subprogram requires the following actions:

(1) If pass-by-value-result parameters are used, move the current values of those parameters to their corresponding actual parameters.
(2) If it is a function, move the functional value to a place the caller can get it.
(3) Restore the execution status of the caller.
(4) Transfer control back to the caller.

The call and return actions require storage for the status information of the caller, the parameters, the return address, and the functional value (if it is a function). These, along with the local variables and the subprogram code, form the complete set of information a subprogram needs to execute and then return control to the caller.

A simple subprogram consists of two separate parts: the actual code of the subprogram, which is constant, and the local variables and data, which can change when the subprogram is executed. Both parts have fixed sizes.

The format, or layout, of the non-code part of an executing subprogram is called an activation record, because the data it describes are only relevant during the activation of the subprogram. The form of an activation record is static. An activation record instance is a concrete example of an activation record (the collection of data for a particular subprogram activation). Because languages with simple subprograms do not support recursion, there can be only one active version of a given subprogram at a time. Therefore, there can be only a single instance of the activation record for a subprogram. One possible layout simply places the local variables, the parameters, and the return address together in the record.

Because an activation record instance for a simple subprogram has a fixed size, it can be statically allocated. Consider a program consisting of a main program and three subprograms: A, B, and C.

The construction of such a complete executable program is not done entirely by the compiler.

In fact, because of independent compilation, MAIN, A, B, and C may have been compiled on different days, or even in different years. At the time each unit is compiled, the machine code for it, along with a list of references to external subprograms, is written to a file. The executable program is put together by the linker, which is part of the operating system. The linker is called for MAIN, and it must find the machine code for A, B, and C, along with their activation record instances, and load them into memory with the code for MAIN.

Implementing Subprograms with Stack-Dynamic Local Variables


One of the most important advantages of stack-dynamic local variables is support for recursion.

More Complex Activation Records
Subprogram linkage in languages that use stack-dynamic local variables is more complex than the linkage of simple subprograms, for the following reasons:

- The compiler must generate code to cause the implicit allocation and deallocation of local variables.
- Recursion must be supported. This adds the possibility of multiple simultaneous activations of a subprogram, which means there can be more than one instance of a subprogram at a given time, with one call from outside the subprogram and one or more recursive calls. Recursion therefore requires multiple instances of activation records, one for each subprogram activation that can exist at the same time. Each activation requires its own copy of the formal parameters and the dynamically allocated local variables, along with the return address.

The format of an activation record for a given subprogram in most languages is known at compile time. In many cases, the size is also known, because all local data is of fixed size. In languages with stack-dynamic local variables, however, activation record instances must be created dynamically. The activation record for such a language contains the return address, the dynamic link, the parameters, and the local variables, as described next.

Because the return address, dynamic link, and parameters are placed in the activation record instance by the caller, these entries must appear first. The return address often consists of a pointer to the code segment of the caller and an offset address in that code segment of the instruction following the call. The dynamic link points to the top of an instance of the activation record of the caller. In static-scoped languages, this link is used in the destruction of the current activation record instance when the procedure completes its execution. The stack top is set to the value of the old dynamic link. The actual parameters in the activation record are the values or addresses provided by the caller. Local scalar variables are bound to storage within an activation record instance. Local structure variables are sometimes allocated elsewhere, and only their descriptors and a pointer to that storage are part of the activation record. Local variables are allocated and possibly initialized in the called subprogram, so they appear last. Consider the following C skeletal function:

void sub(float total, int part) {
    int list[4];
    float sum;
}

The activation record for sub therefore contains (in order) the return address, the dynamic link, the parameters total and part, and the local variables list and sum. Activating a subprogram requires the dynamic creation of an instance of the activation record for the subprogram. Because the call and return semantics specify that the subprogram last called is the first to complete, it is reasonable to create instances of these activation records on a stack. This stack is part of the run-time system and is called the run-time stack. Every subprogram activation, whether recursive or non-recursive, creates a new instance of an activation record on the stack. This provides the required separate copies of the parameters, local variables, and return address.
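Purely as a conceptual model (not how a compiler actually lays out memory), the activation record for sub and the run-time stack can be pictured as C++ structures:

#include <vector>

// Conceptual model of one activation record instance for sub;
// the field order mirrors the layout described above.
struct SubActivationRecord {
    void* return_address;  // where to resume in the caller
    void* dynamic_link;    // top of the caller's activation record instance
    float total;           // parameter
    int   part;            // parameter
    int   list[4];         // local variable
    float sum;             // local variable
};

// The run-time stack: one instance per live activation.
std::vector<SubActivationRecord> runtime_stack;

int main() {
    runtime_stack.push_back({});  // a call to sub pushes an instance
    runtime_stack.push_back({});  // a second (e.g. recursive) activation
    runtime_stack.pop_back();     // the last called is the first to complete
    runtime_stack.pop_back();
    return 0;
}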

PARAMETER PASSING MECHANISMS


We have seen that procedures and functions may have formal parameters associated with them (parameters for functions and procedures). These formal parameters get instantiated with copies of the actual parameters when the procedure or function is called (routine invocation). So far we have assumed that parameters can only be passed to procedures/functions, where they act as local constants (local to the procedure/function in question). If we want to return a value we must use a function. However, functions can only return a single value. Clearly there is a need to provide some mechanism whereby routines can return more than one value, or operate on the actual parameters rather than copies. In fact most imperative programming languages (including Ada) also allow parameters to be passed back from procedures (other than through the use of a function call). Ada actually supports three different kinds of parameter:

- In parameters, which are used to pass data items to procedures/functions
- Out parameters, which are used to return data items from procedures
- In-out parameters, which are used to pass data items to procedures, where they are updated and then returned

By default Ada assumes that all parameters are "in" parameters; these are the kind of parameters we have been using up to now. To ensure the correct operation of each of the above, Ada uses the following general parameter passing mechanisms:

- Call by constant value
- Call by result
- Call by copy-restore (also known as call by value-result)

Call by value
The initial value of a call by value parameter is the value of the corresponding argument (i.e. the value of the argument is copied into the parameter as part of calling the procedure). As this mechanism is very common, a short example should suffice:

begin
    integer a;
    procedure byvalue(integer value w)
    begin
        write("w is ", w);
        w = w * 3;
        write("a is now ", a);
        write("w tripled is ", w);
    end;
    a = 2;
    byvalue(a);
    write("a is currently ", a);
    byvalue(a*5);
    write("a's final value is ", a);
end

The output from this program would be:

w is 2
a is now 2
w tripled is 6
a is currently 2
w is 10
a is now 2
w tripled is 30
a's final value is 2

The call by value mechanism is used by languages like C and Java.

Call by result
A call by result parameter's initial value is undefined. When the procedure is finished, the final value of a call by result parameter is copied out to the argument (i.e. the argument gets the resulting value of the parameter as computed by the procedure). Here's an example:

begin
    integer a, b;
    procedure byresult(integer value t, integer result w)
    begin
        write("t is ", t);
        // we can't print out the value of w here
        // as it is "undefined".
        w = t * 3;
        write("a is now ", a);
    end;
    a = 10;
    write("a is ", a);
    byresult(5, a);
    write("a is currently ", a);
    byresult(a, a);
    write("a's final value is ", a);
end

The output from this example would be:

a is 10
t is 5
a is now 10
a is currently 15
t is 15
a is now 15
a's final value is 45

A few notes are definitely in order:

- Changing the value of w while inside byresult has no effect on the value of the corresponding argument (i.e. a).
- Since the final value of w is passed back out to the corresponding argument, the corresponding argument must be something which can be assigned to (i.e. an lvalue).

Call by result was implemented by a variety of early programming languages, including Algol W. On a more modern note, Microsoft's C# programming language supports "out" parameters which provide call by result functionality. The second call to byresult illustrates one way to get an argument's value into a procedure and get a resulting value back into the same identifier. There's a much better way to do this, which is the mechanism we'll discuss next.

Call by value result


The call by value result mechanism is really just call by value combined with call by result. A call by value result parameter's initial value is the value of the corresponding argument, and the final value of the parameter is copied out to the argument. Let's look at an example:

begin
    integer a, b;
    procedure byvalueresult(integer value result w)
    begin
        write("w is ", w);
        w = w * 3;
        write("a is now ", a);
    end;
    a = 10;
    write("a is ", a);
    byvalueresult(a);
    write("a's final value is ", a);
end

The output of this program would be:

a is 10
w is 10
a is now 10
a's final value is 30

A few notes are in order:

- Changing the value of w while inside byvalueresult has no impact on the value of the corresponding argument (i.e. a).
- The argument for a call by value result parameter must be an lvalue.

Call by reference
The call by reference mechanism is fundamentally different from the mechanisms discussed above. A call by reference parameter is a reference to the argument. As such, any use within a procedure of a call by reference parameter is actually a use of the argument. The name of this mechanism reflects how the word "reference" is usually used instead of "use": any reference within a procedure to a call by reference parameter is actually a reference to the argument. Let's look at an example:

begin
    integer a, i;
    integer v[10];
    procedure byreference(integer reference w)
    begin
        i = i + 1;
        write("i is now ", i);
        write("w is ", w);
        w = w * 3;
        write("a is now ", a);
        write("v[1] is now ", v[1]);
        write("v[2] is now ", v[2]);
    end;
    a = 10;
    i = 1;
    v[1] = 10;
    v[2] = 20;
    write("a is ", a);
    byreference(a);
    write("a's final value is ", a);
    write("v[1] is ", v[1]);
    write("v[2] is ", v[2]);
    i = 1;
    byreference(v[i]);
    write("v[1]'s final value is ", v[1]);
    write("v[2]'s final value is ", v[2]);
end

The output of this program would be:

a is 10
i is now 2
w is 10
a is now 30
v[1] is now 10
v[2] is now 20
a's final value is 30
v[1] is 10
v[2] is 20
i is now 2
w is 10
a is now 30
v[1] is now 30
v[2] is now 20
v[1]'s final value is 30
v[2]'s final value is 20

A few notes are (possibly) in order:

- Assigning to w while inside byreference immediately changes the value of the corresponding argument (i.e. a).
- The argument for a call by reference parameter must be an lvalue.
- Just ignore the part dealing with the v array for now (its purpose will become clear shortly).

Optimizing compiler writers don't much like call by reference, as it can be very difficult to determine what impact a change to a global variable might have on call by reference parameters and vice-versa. For example, the compiler may have to assume that any assignment to a global variable could have modified the argument associated with any call by reference parameter whose type is the same as the global variable. Similarly, it may have to assume that any assignment to a call by reference parameter could have modified any global variable of the same type as the parameter, in addition to possibly modifying the argument associated with any other call by reference parameter of the same type.
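A short C++ illustration of that aliasing problem (the names are invented): when the argument is the global itself, the parameter and the global are the same object, which is exactly the possibility the optimizer must account for:

#include <iostream>

int g = 10;  // global variable

void f(int& w) {  // call by reference
    g = g + 1;    // changes w too, but only if w aliases g
    w = w * 2;    // changes g too, under the same aliasing
}

int main() {
    f(g);                    // w is an alias for g
    std::cout << g << '\n';  // prints 22: both assignments hit g
    int local = 10;
    f(local);                // no aliasing this time
    std::cout << g << ' ' << local << '\n';  // prints 23 20
    return 0;
}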

Call by name
The call by name mechanism is also fundamentally different from any of the mechanisms described above. A call by name parameter is an alias or alternative name for the corresponding argument. What this means is that any use of a call by name parameter is actually a use of the corresponding argument as it is expressed in the function call. Consider the following program:

begin
    integer a, i;
    integer v[10];
    procedure byname(integer name w)
    begin
        write("i is ", i);
        write("w is ", w);
        i = i + 1;
        write("i is now ", i);
        write("w is now ", w);
        w = w * 3;
        write("a is now ", a);
        write("v[1] is now ", v[1]);
        write("v[2] is now ", v[2]);
    end;
    a = 10;
    i = 1;
    v[1] = 10;
    v[2] = 20;
    write("a is ", a);
    byname(a);
    write("a's final value is ", a);
    write("v[1] is ", v[1]);
    write("v[2] is ", v[2]);
    i = 1;
    byname(v[i]);
    write("v[1]'s final value is ", v[1]);
    write("v[2]'s final value is ", v[2]);
end

The output of this program would be:

a is 10
i is 1
w is 10
i is now 2
w is now 10
a is now 30
v[1] is now 10
v[2] is now 20
a's final value is 30
v[1] is 10
v[2] is 20
i is 1
w is 10
i is now 2
w is now 20
a is now 30
v[1] is now 10
v[2] is now 60
v[1]'s final value is 10
v[2]'s final value is 60

A few notes are definitely in order:

- The part of the program involving the scalar variable a is the same as in the call by reference example in the previous section.
- The part involving the array v is fundamentally different from the same part of the call by reference example in the previous section.
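Call by name can be emulated in modern languages by passing thunks, parameterless routines that re-evaluate the argument expression on every use. A hedged C++ sketch mirroring the byname(v[i]) call above (the lambdas stand in for the compiler-generated thunks of Algol 60):

#include <functional>
#include <iostream>

int i;
int v[10];

// read_w and write_w re-evaluate the argument expression at each use,
// which is the essence of call by name.
void byname(std::function<int()> read_w, std::function<void(int)> write_w) {
    i = i + 1;              // after this, the expression v[i] means v[2]
    write_w(read_w() * 3);  // reads v[2] and writes v[2]
}

int main() {
    i = 1;
    v[1] = 10;
    v[2] = 20;
    byname([] { return v[i]; },       // thunk for reading v[i]
           [](int x) { v[i] = x; });  // thunk for writing v[i]
    std::cout << v[1] << ' ' << v[2] << '\n';  // prints 10 60, as in the trace above
    return 0;
}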
