Friday 2 January 2015

Characteristics of Computers

Computers have the following characteristics.

Speed

A computer is a very fast device. It performs calculations at an amazing speed and without any mistakes. Even a microcomputer can perform calculations in one minute that might otherwise take a scientist or an engineer an entire lifetime. A modern computer can execute several million instructions per second (MIPS). The speed of a computer is usually measured in microseconds, nanoseconds and picoseconds.

Accuracy

The accuracy of a computer is consistently high, and the degree of accuracy of a particular computer depends upon its design. Errors can occur in a computer, but these are mainly man-made, arising from inaccurate data or imprecise thinking by the programmer rather than from technical weaknesses.

Diligence

Unlike human beings, a computer is free from monotony, tiredness and lack of concentration; it can perform the same task over and over for hours without making errors.

Versatility

A computer is capable of performing almost any task provided that the task can be reduced to a series of logical steps.

High Storage Capacity

Computers can store large amounts of data and information in a very small space, and any part of it can be retrieved as and when required, in a microsecond or even less.

Reliability

Computers are designed in such a way that their individual components have a very long life and a high degree of reliability. A computer performs calculations without tiring and without making mistakes.

No Intelligence

Computers possess no intelligence of their own. They have to be told what to do and in what sequence. Computers cannot make any judgement of their own; their judgement is based on the instructions given to them in the form of programs.



Thursday 15 August 2013

Deadlock Avoidance

Deadlock prevention methods tend to be wasteful in terms of machine resources. Deadlock avoidance attempts to predict the possibility of a deadlock dynamically as each resource request is made. The basic idea of deadlock avoidance is to grant only those requests for available resources that cannot possibly result in a state of deadlock. This strategy is usually implemented by having the resource allocator check the effect of granting a particular resource request. If granting the resource cannot possibly lead to deadlock, the resource is granted to the requester. Otherwise, the requesting process is suspended until its pending request can be safely granted.

The most famous deadlock avoidance algorithm is Dijkstra's Banker's Algorithm. The algorithm is so called because it involves a banker who makes loans and receives payments from a given source of capital. The banker will only grant a loan if he can still meet the needs of the other customers, based on their projected future loan requirements.

In order to evaluate the safety of individual system states, deadlock avoidance algorithms generally require all processes to preclaim their maximum resource requirements prior to execution. Once its execution begins, each process requests its resources as and when needed, up to the preclaimed maximum limit. Processes whose preclaimed resource requirements exceed the total capacity of the system are not admitted for execution.
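The safety check at the heart of the Banker's Algorithm can be sketched as follows (a minimal illustration in Python; the function and variable names are ours): a state is safe if the processes can be ordered so that each one's remaining need can be met in turn, after which it returns everything it holds.

# Minimal sketch of the Banker's safety check (illustrative names).
# 'available' is the vector of free units per resource type,
# 'allocation[p]' is what process p currently holds,
# 'max_claim[p]' is its preclaimed maximum requirement.

def is_safe(available, allocation, max_claim):
    """Return True if some order exists in which every process can finish."""
    work = list(available)
    need = [[m - a for m, a in zip(max_claim[p], allocation[p])]
            for p in range(len(allocation))]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for p in range(len(allocation)):
            # A process can finish if its remaining need fits within 'work'.
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # Simulate p running to completion and releasing its holdings.
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                progress = True
    return all(finished)

# Example: one resource type, 3 units free.
print(is_safe([3], [[5], [2]], [[10], [4]]))   # True: P1 can finish, then P0

A resource allocator would grant a request only if the state obtained by provisionally granting it still passes this check.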

Wednesday 14 August 2013

Deadlock Handling

There are three strategies for dealing with deadlock:
·        Deadlock Prevention
·        Deadlock Avoidance
·        Deadlock Detection and Recovery

Deadlock Prevention

Since four conditions are required to produce a deadlock, one can infer that if one or more of the conditions is negated, then deadlock can be prevented. Thus, the basic idea of deadlock prevention is to deny at least one of the four necessary conditions for deadlock. The mutual exclusion condition cannot be disallowed, so it is customary to prevent one or more of the remaining three conditions. Deadlock prevention methods tend to be wasteful in terms of machine resources.

Denying Hold and Wait

The hold and wait condition can be eliminated by requiring or forcing a process to release all resources held by it whenever it requests a resource that is not available. In other words, deadlocks are prevented because waiting processes are not holding any resources. There are basically two possible implementations of this strategy:

·        The process requests all needed resources at one time, prior to the commencement of execution.
·        The process requests resources incrementally in the course of execution, but releases all of its resource holdings upon encountering a denial.
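The second implementation can be sketched as follows (an illustrative Python fragment in which locks stand in for resources; all names are hypothetical): the process tries to acquire resources one at a time, but on any denial it backs off completely, so it never waits while holding.

import threading

# Hypothetical resources, represented here by locks.
tape = threading.Lock()
printer = threading.Lock()

def acquire_all_or_none(resources):
    """Try to acquire every resource; back off completely on any denial."""
    held = []
    for r in resources:
        if r.acquire(blocking=False):   # non-blocking attempt
            held.append(r)
        else:
            # Denied: release everything already held, so the process
            # never waits while holding resources (no hold-and-wait).
            for h in held:
                h.release()
            return False
    return True

if acquire_all_or_none([tape, printer]):
    try:
        pass  # use the tape and the printer
    finally:
        tape.release()
        printer.release()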

Sunday 30 June 2013

UGC NET Computer Science June 2013 Answer key

1. The software maturity index (SMI) is defined as SMI= [Mf - (Fa + Fc + Fd )]/Mf
Where
Mf = the number of modules in the current release.
Fa = the number of modules in current release that have been added.
Fc = the number of modules in current release that have been changed.
Fd = the number of modules in current release that have been deleted.
The product begins to stabilize when
(A) SMI approaches 1
(B) SMI approaches 0
(C) SMI approaches -1
(D) None of the above

Answer: (A) SMI approaches 1

Reference: Information Technology Project Management: A Concise Study, 2nd Ed., by Kelkar, Page 509
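With illustrative figures: if Mf = 100, Fa = 4, Fc = 6 and Fd = 2, then SMI = [100 - (4 + 6 + 2)]/100 = 0.88; as fewer modules are added, changed or deleted between releases, SMI approaches 1 and the product stabilizes.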


2. Match the following:
A. Watson-Felix model                    i. Failure intensity
B. Quick-Fix model                       ii. Cost estimation
C. Putnam resource allocation model      iii. Project planning
D. Logarithmic Poisson model             iv. Maintenance

Answer: (D) A - ii, B - iv, C - iii, D - i

3. ------ is a process model that removes defects before they can precipitate serious hazards.
(A) Incremental model
(B) Spiral model
(C) Cleanroom software engineering
(D) Agile model
Answer: (C) Cleanroom software engineering is a process model that removes defects before they can precipitate serious hazards.

Reference: Software Engineering, Pressman, Page 700

4. Equivalence partitioning is a ------------------- method that divides the input domain of a program into classes of data from which test cases can be derived.

(A) white-box testing
(B) black-box testing
(C) orthogonal array testing
(D) stress testing
Answer: (B) black-box testing
Reference: Software Engineering, Pressman, Page 463

5. The following three golden rules:
1. Place the user in control.
2. Reduce the user's memory load.
3. Make the interface consistent.
are for
(A) User satisfaction
(B) Good interface design
(C) Saving system's resources
(D) None of these

Answer: (B) Good interface design

Reference: Software Engineering, Pressman, Page 402
These golden rules actually form the basis for a set of user interface design principles that guide this important software design activity.

6. Software safety is a ---------------------- activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail.

(A) Risk mitigation, monitoring, and management
(B) Software quality assurance
(C) Software cost estimation
(D) Defect removal efficiency

Answer: (B) Software quality assurance
Reference: Software Engineering, Pressman, Page 213


Next: Questions and answers 7-15 coming soon

Thursday 16 May 2013

Deadlocks


A process is said to be deadlocked when it is waiting for an event which will never occur. A deadlock is a situation in which a group of processes is permanently blocked as a result of each process having acquired a subset of the resources needed for its completion while waiting for the release of the remaining resources held by others in the group, thus making it impossible for any of the processes to proceed.

A deadlock involving a set of processes D is said to have occurred if
·         Every process Pi in D is blocked on some event ei, and
·         Event ei can only be caused by some process(es) in D.

Deadlocks can occur in concurrent environments as a result of the uncontrolled granting of system resources to requesting processes. As an example, consider a system consisting of one printer, a file F on a tape, and two processes Pi and Pj. Both Pi and Pj require the tape and printer devices for their functioning. Suppose that the processes make their resource requests in the following order.
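To make this concrete, the following illustrative Python fragment (the request order shown is our example, chosen to force the deadlock) has each process acquire one device and then wait forever for the other:

import threading, time

# Illustrative only: one request order that deadlocks Pi and Pj.
tape = threading.Lock()
printer = threading.Lock()

def pi():
    with tape:              # Pi acquires the tape first
        time.sleep(0.1)     # give Pj time to grab the printer
        with printer:       # ...then blocks waiting for the printer
            pass

def pj():
    with printer:           # Pj acquires the printer first
        time.sleep(0.1)
        with tape:          # ...then blocks waiting for the tape
            pass

# Starting both threads deadlocks the pair: each holds what the other
# needs, so neither can proceed (left commented out so the snippet
# does not hang if run).
# threading.Thread(target=pi).start(); threading.Thread(target=pj).start()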

Tuesday 14 May 2013

Semaphores


In 1965, Dijkstra proposed a new and very significant technique for mutual exclusion among an arbitrary number of processes, using the value of a simple shared integer variable. A semaphore is a shared variable with non-negative values which can only be subjected to the following operations:
·        Initialization, specified as part of its declaration, and
·        Indivisible (atomic) operations wait and signal.

The wait and signal operations were originally termed P (for wait; from the Dutch 'proberen', to test) and V (for signal; from 'verhogen', to increment). Indivisibility of the wait and signal operations implies that only one wait or signal operation can be in progress on a semaphore at any time and that, once started, the execution of a wait or signal operation must be completed without interruption. The wait and signal operations can be defined as follows:

The wait operation on semaphore ‘S’, written wait(S), operates as follows:
            If S>0 then
                        S=S-1;
            Else
                        Block the process executing wait operation, that is wait on S.
            Endif
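
The signal operation on semaphore 'S', written signal(S), operates as follows:
            If any process is blocked on S then
                        Wake one of the processes blocked on S;
            Else
                        S=S+1;
            Endif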

Solutions to Mutual Exclusion


Three basic approaches are available for implementing mutual exclusion:
1.   Software solutions
2.   Hardware solutions
3.   Mutual exclusion primitives (e.g., semaphores)
The software implementation depends on an arrangement of checks to ensure mutual exclusion. The correctness of the critical section implementation thus depends on the correctness of these checks, which is hard to prove due to the complexity of the checks. This is a serious drawback in the development of large concurrent systems. However, the major advantage of the software approach is that no special hardware or system software support is required to implement a critical section.
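As an example of such an arrangement of checks, here is a sketch of Peterson's classic two-process algorithm (rendered in Python purely for readability; it assumes sequentially consistent memory, so it is illustrative rather than a practical implementation):

# Peterson's algorithm for two processes with ids 0 and 1.
flag = [False, False]    # flag[i] is True while process i wants to enter
turn = 0                 # the process expected to yield

def enter_region(i):
    global turn
    other = 1 - i
    flag[i] = True       # announce intent to enter
    turn = other         # politely give priority to the other process
    while flag[other] and turn == other:
        pass             # busy-wait until it is safe to enter

def leave_region(i):
    flag[i] = False      # clear intent, letting the other process in

The correctness argument rests entirely on the interplay of the flag and turn checks, which illustrates why such proofs become hard as systems grow.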
Hardware solutions use special hardware instructions, often exposed in the form of language constructs, for process synchronization. Since these language constructs have well-defined semantics enforced by the language compiler, the construction of large concurrent systems becomes more practical.
A set of primitives for mutual exclusion (e.g., the P and V primitives of Dijkstra) was developed to overcome the complexity of the software approach. These primitives can be implemented either in the form of busy-waiting (as software or hardware solutions) or as operating system services invoked by system calls.
Both hardware and software solutions to the critical section problem are implemented in the form of busy-waiting. In busy-waiting, a process sits in a tight loop, checking a particular condition until it is allowed to enter its critical section. Thus, if busy-waiting is employed, a process waiting for access to a critical section continues to

Wednesday 10 April 2013

Critical Section and Mutual Exclusion

Concurrent processes within a system may have to cooperate or interact in many situations in order to share common resources such as data structures or physical devices. Cooperating processes must synchronize with each other to prevent concurrency-related timing problems caused by concurrent access to shared devices. Without adequate inter-process synchronization, the updating of shared variables can induce concurrency-related timing errors that are often difficult to debug. One of the primary causes of this problem is the possibility that a concurrent process may observe temporarily inconsistent values of a shared variable while it is being accessed and/or updated.

A critical section for a data item d is a section of code which cannot be executed concurrently with itself or with other critical sections for d. Thus, a critical section is a sequence of instructions with a marked beginning and end. When a process enters a critical section, it must complete all instructions therein before any other process is allowed to enter the same critical section. Only the process executing the critical section is allowed access to the shared variables, and all other processes should be prevented from doing so until the completion of the critical section. This is often referred to as mutual exclusion, in which a single process temporarily excludes all others from using a shared resource in order to ensure the system's integrity.
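As a minimal illustration (in Python, with hypothetical names), two threads incrementing a shared counter form a critical section that a lock serializes:

import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    global counter
    for _ in range(n):
        # The read-modify-write below is the critical section for
        # 'counter'; the lock ensures at most one thread executes it
        # at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # always 200000 with the lock; without it, often less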

Tuesday 19 March 2013

Concurrent Processes and Process Synchronization


Processes are concurrent if they exist at the same time in a system. Concurrent processes can function completely independently of one another, and they compete with each other for system resources. However, concurrent processes may require occasional synchronization and cooperation to share common resources such as data structures or physical devices. Cooperating concurrent processes must synchronize with each other when they have to use shared resources. Concurrent processes can share all resources that are serially reusable.

A serially reusable resource can be used by at most one process at a time. Its state and operation can be corrupted if it is manipulated concurrently, without synchronization, by more than one process. Examples include read/write shared variables and devices such as printers. Thus, cooperating concurrent processes must synchronize with each other to preserve precedence relationships and to prevent concurrency-related timing problems. Failure to devise appropriate synchronization protocols, and to enforce their use by each process that uses a common resource, often results in erratic system behaviour and crashes that are notoriously difficult to debug.

Monday 18 March 2013

MLQ with Feedback Scheduling


In MLQ with feedback, rather than having fixed classes of processes allocated to specific queues, the idea is to make the traversal of a process through the system dependent on its run-time behaviour.
[Figure: An example of multilevel queue with feedback scheduling]

Sunday 17 March 2013

Multiple- Level Queues (MLQ) Scheduling


MLQ scheduling is one of the most general scheduling disciplines, suitable for complex environments that serve a mixture of processes with different characteristics. One way to implement MLQ scheduling is to classify the workload according to its characteristics and to maintain separate process queues serviced by different schedulers. The workload might, for example, be divided into system processes, interactive processes, and batch processes.

A multilevel queue scheduling algorithm divides the ready queue into several queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes: the foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
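A minimal sketch of such a two-queue arrangement (illustrative Python; the queue names and quantum are assumptions, not from any particular system):

from collections import deque

# Two fixed queues: foreground jobs are served round-robin, background
# jobs first-come-first-served, and the foreground queue has absolute
# priority over the background one.
foreground = deque()       # each entry: (name, remaining_time)
background = deque()
QUANTUM = 4

def schedule_step():
    """Pick the next job to run and return (name, time it runs)."""
    if foreground:
        name, remaining = foreground.popleft()
        run = min(QUANTUM, remaining)                   # RR: one time slice
        if remaining - run > 0:
            foreground.append((name, remaining - run))  # back of its queue
        return (name, run)
    if background:
        name, remaining = background.popleft()
        return (name, remaining)                        # FCFS: run to completion
    return None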

Friday 15 March 2013

High-response-Ratio Next (HRN) Policy


The HRN policy corrects some of the weaknesses of the LCN and SRTN policies, particularly the excessive discrimination against longer jobs and the excessive favouritism towards short new jobs. HRN is a non-preemptive scheduling discipline in which the priority of each job is a function not only of the job's service time but also of the amount of time the job has been waiting for service. Once a job gets the CPU, it runs to completion. Dynamic priorities in HRN are calculated as follows:

Response Ratio = (time waiting + execution time received so far) / (execution time received so far)
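As an illustrative calculation: a job that has waited 9 time units and received 3 units of execution has a response ratio of (9 + 3)/3 = 4, while one that has waited 9 units and received 9 has a ratio of (9 + 9)/9 = 2, so the first (shorter) job is selected.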

The HRN policy always selects the process with the highest response ratio for execution. Since the execution time received appears in the denominator, shorter jobs will get preference. But because time waiting appears in

Saturday 9 March 2013

Least Completed Next (LCN) Scheduling

LCN is a preemptive scheduling policy which always schedules the request that has consumed the least amount of processor time of all the requests existing in the system. Thus, the nature of a job, whether CPU-bound or I/O-bound, does not influence its progress in the system. All requests make approximately equal progress in terms of the processor time consumed, and short jobs are guaranteed to finish ahead of long requests. This policy, however, has the drawback of starving long jobs of CPU attention.

Shortest Time to Go (STG) or Shortest Remaining Time Next (SRTN) Scheduling

SRTN is the preemptive version of the SJN policy and is useful in time-sharing systems. In SRTN, the process with the smallest estimated run-time to completion is run next, including new arrivals. Thus, in SRTN, a

Friday 8 March 2013

Round Robin (RR) Scheduling


Round Robin scheduling is one of the most commonly used preemptive scheduling disciplines in interactive environments. The processor time is divided into slices, or quanta, which are allocated to requestors. No process can run for more than one time slice while there are other processes waiting in the ready queue.

RR scheduling is similar to FCFS scheduling, but preemption is added to switch between processes. The time slice typically ranges from 10 to 100 milliseconds.
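A minimal round-robin sketch in Python (the process names and quantum are illustrative):

from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time). Returns completion order."""
    ready = deque(processes)
    order = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # Time slice expires: preempt and requeue at the back.
            ready.append((name, remaining - quantum))
        else:
            order.append(name)      # finishes within this slice
    return order

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
# ['C', 'B', 'A'] -- no process runs more than one slice while others wait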

[Figure: RR Scheduling]

Thursday 7 March 2013

Deadline Scheduling in Operating System

In deadline scheduling, certain jobs are scheduled to be completed by a specific time or deadline. These jobs may have very high value if delivered on time and may be worthless if delivered later than the deadline. Deadline scheduling is complex for many reasons:
·        The system must run the deadline requests without severely degrading services to other users.
·        The user must supply the precise resource requirements of the job in advance.
·        The intensive resource management required by deadline scheduling may generate substantial overhead.
·        If many deadline requests are to be activated at once, scheduling becomes so complex that sophisticated optimization methods might be needed to ensure that deadlines are met.

Wednesday 6 March 2013

Shortest Job First Scheduling


Shortest Job First (SJF) scheduling is a non-preemptive discipline in which the scheduler always schedules the shortest of the arrived requests. Thus, a request remains pending until all shorter requests have been serviced. SJF favours short jobs at the expense of longer ones.

SJF selects jobs for service in a manner that ensures the next job will complete and leave the system as soon as possible. This tends to reduce the number of waiting jobs, and also reduces the number of jobs waiting behind large jobs. As a result, SJF can minimize the average waiting time of jobs as they pass through the system. However, waiting times have a larger variance than under FCFS, especially for large jobs.
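A small illustration (in Python, with made-up burst times) of why serving the shortest jobs first lowers the average waiting time:

def avg_waiting_time(bursts):
    """Average waiting time when jobs run in the given order (all arrive at 0)."""
    waited, clock = 0, 0
    for b in bursts:
        waited += clock      # this job waited until the clock reached it
        clock += b
    return waited / len(bursts)

jobs = [6, 8, 7, 3]
print(avg_waiting_time(sorted(jobs)))   # SJF order:  (0+3+9+16)/4 = 7.0
print(avg_waiting_time(jobs))           # FCFS order: (0+6+14+21)/4 = 10.25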

SJF is very difficult to implement because it requires precise knowledge of how long a request will run, and this information is not usually available in advance. The best SJF can do is to depend on user estimates of run times. Relying on user estimates leaves the system open to abuse or manipulation, as a user might try to

Tuesday 5 March 2013

First Come First Served FCFS Scheduling


Scheduling Policies
A variety of scheduling policies are available; each of them is suited to the needs of a particular type of scheduler. In general, scheduling disciplines may be preemptive or non-preemptive.
A non-preemptive policy always processes a scheduled request to completion. In non-preemptive scheduling, the running process retains ownership of its allocated resources, including the processor, until it voluntarily surrenders control to the operating system. Non-preemptive scheduling is attractive due to its simplicity: since scheduling is performed only when the previously scheduled request completes, the list of pending requests never contains a preempted request.
With preemptive scheduling, a running process may be replaced by a higher-priority process at any time. This is accomplished by activating the scheduler whenever an event that changes the state of the system is detected. Preemptive scheduling is much more responsive than non-preemptive scheduling, but it imposes higher overheads on the operating system, since each rescheduling requires a complete process switch.

Monday 4 March 2013

Scheduling


Scheduling is the activity of determining which request should be handled next by a server. In an operating system, all requests waiting to be serviced are kept in a list of pending requests. Whenever scheduling is to be performed, the scheduler examines the pending requests and selects one for servicing. This request is handed over to the CPU. A request leaves the CPU when it completes or when it is preempted by the scheduler, in which case it is put back into the list of pending requests. In either situation, the scheduler performs scheduling to select the next request to be serviced.

Levels of Scheduling


Three levels of scheduling in an operating system are long-term scheduling, medium-term scheduling and short-term scheduling.

Process States


A process goes through a series of discrete process states. The five general categories of process states are:
·        New
·        Ready
·        Running
·        Blocked
·        Terminated

Some authors describe a process as having four states (by eliminating the New state).
The above five states are briefly explained below.
[Figure: Process States]
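One compact way to capture the five states and their usual transitions (an illustrative Python sketch; the transition set follows the common textbook diagram):

from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Legal transitions between the five states.
TRANSITIONS = {
    State.NEW:     {State.READY},                 # admitted by the scheduler
    State.READY:   {State.RUNNING},               # dispatched to the CPU
    State.RUNNING: {State.READY,                  # preempted / quantum expiry
                    State.BLOCKED,                # waits for an event or I/O
                    State.TERMINATED},            # completes or is killed
    State.BLOCKED: {State.READY},                 # awaited event occurs
}

def move(current, new):
    """Validate and perform a state transition."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new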

Wednesday 27 February 2013

Process Management


Process management deals with the assignment of the processor to the different tasks being performed by the computer system. The processor management module of the operating system consists of three components: the job scheduler, which deals with the management of jobs; the process scheduler, which deals with the management of processes; and the traffic controller, which keeps track of the status of the processes.

A job is a sequence of job steps, each job step being the execution of a program. For example, a job involving the compilation and execution of a program written in a high-level language involves the following steps:
·        Execution of the compiler to compile the user's program
·        Execution of the linker and loader to prepare and load the executable program
·        Execution of the user's program