Operating Systems: Placement Preparation
Hi friends, these are some FAQs on operating system concepts, collected from various years' and companies' papers. Hope this will help you in your preparations.
1. Explain the concept of Reentrancy.
It is a useful, memory-saving technique for multiprogrammed timesharing systems. A Reentrant Procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has 2 key aspects:
i.) The program code cannot modify itself,
ii.) The local data for each user process must be stored separately.
Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program and the local variables used by that program. Each execution instance is called an activation. It executes the code in the permanent part, but has its own copy of local variables/parameters. The temporary part associated with each activation is the activation record. Generally, the activation record is kept on the stack.
Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.
2. Explain Belady's Anomaly.
Also called FIFO anomaly. Usually, on increasing the number of frames allocated to a process' virtual memory, the process execution is faster, because fewer page faults occur. Sometimes, the reverse happens, i.e., the execution time increases even when more frames are allocated to the process. This is Belady's Anomaly. This is true for certain page reference patterns.
3. What is a binary semaphore? What is its use?
A binary semaphore is one, which takes only 0 and 1 as values. They are used to implement mutual exclusion and synchronize concurrent processes.
4. What is thrashing?
It is a phenomenon in virtual memory schemes, when the processor spends most of its time swapping pages, rather than executing instructions. This is due to an inordinate number of page faults.
5. List the Coffman's conditions that lead to a deadlock.
a) Mutual Exclusion: Only one process may use a critical resource at a time.
b) Hold & Wait: A process may be allocated some resources while waiting for others.
c) No Pre-emption: No resource can be forcibly removed from a process holding it.
d) Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.
6. What are short-, long- and medium-term scheduling?
Long term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium term scheduling is part of the swapping function. This relates to processes that are in a blocked or suspended state. They are swapped out of main-memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.
The short term scheduler, also known as the dispatcher, executes most frequently, and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs. It may lead to interruption of one process by preemption.
7. What are turnaround time and response time?
Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between submission of a request, and the first response to that request.
8. What are the typical elements of a process image?
a) User data: Modifiable part of user space. May include program data, user stack area, and programs that may be modified.
b) User program: The instructions to be executed.
c) System Stack: Each process has one or more LIFO stacks associated with it. Used to store parameters and calling addresses for procedure and system calls.
d) Process Control Block (PCB): Info needed by the OS to control processes.
9. What is the Translation Lookaside Buffer (TLB)?
In a cached system, the base addresses of the last few referenced pages are maintained in a set of registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. With a TLB in between, this is reduced to just one physical memory access on a TLB hit.
10. What is the resident set and working set of a process?
Resident set is that portion of the process image that is actually in main-memory at a particular instant. Working set is that subset of resident set that is actually needed for execution. (Relate this to the variable-window size method for swapping techniques.)
11. When is a system in safe state?
The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.
12. What is cycle stealing?
We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may
force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.
13. What is meant by arm-stickiness?
If one or a few processes have a high access rate to data on one track of a storage disk, then they may monopolize the device by repeated requests to that track. This generally happens with most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc). High-density multi-surface disks are more likely to be affected by this, than the low density ones.
14. What are the stipulations of C2 level security?
C2 level security provides for:
1. Discretionary Access Control
2. Identification and Authentication
3. Auditing
4. Resource Reuse
15. What is busy waiting?
The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
16. Explain the popular multiprocessor thread-scheduling strategies.
Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained. Each processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis.
Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a 1-to-1 basis. Closely related threads / processes may be scheduled this way to reduce synchronization blocking, and minimize process switching. Group scheduling predated this strategy.
Dedicated processor assignment: Provides implicit scheduling defined by assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.
Dynamic scheduling: The number of threads in a program can be altered during the course of execution.
17. When does the condition 'rendezvous' arise?
In message passing, it is the condition in which, both, the sender and receiver are blocked until the message is delivered.
18. What is a trap and trapdoor?
Trapdoor is a secret undocumented entry point into a program, used to grant access without normal methods of access authentication. A trap is a software interrupt, usually the result of an error condition.
19. What are local and global page replacements?
Local replacement means that an incoming page is brought in only to the relevant process' address space. A global replacement policy allows any page frame from any process to be replaced. The latter is applicable to the variable-partition model only.
20. Define latency, transfer and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track. Rotational delay or latency is the time taken for the required sector to move under the disk head. The sum of the seek time (if any) and the latency is the access time for accessing a particular sector on a particular track. The time taken to actually transfer a span of data is the transfer time.
21. Describe the Buddy system of memory allocation.
Free memory is maintained in linked lists, one list per block size; every block has size 2^k. When a process requires memory, a block of the next higher order is chosen and, if needed, split in two. The two pieces produced by a split differ in address only in their kth bit; such pieces are called buddies. When a used block is freed, the OS checks whether its buddy is also free. If so, the two are rejoined and put back into the free-block linked list of the next order.
22. What is time stamping?
It is a technique proposed by Lamport, used to order events in a distributed system without the use of clocks. This scheme is intended to order events consisting of the transmission of messages. Each system 'i' in the network maintains a counter Ci. Every time a system transmits a message, it increments its counter by 1 and attaches the time-stamp Ti to the message. When a message is received, the receiving system 'j' sets its counter Cj to 1 more than the maximum of its current value and the incoming time-stamp Ti. At each site, the ordering of messages is determined by the following rules:
For a message x from site i and a message y from site j, x precedes y if Ti < Tj, or if Ti = Tj and i < j.
23. How are the wait/signal operations for monitor different from those for semaphores?
If a process in the monitor signals and no task is waiting on the condition variable, the signal is lost. So this allows easier program design. Whereas in semaphores, every operation affects the value of the semaphore, so the wait and signal operations should be perfectly balanced in the program.
24. In the context of memory management, what are placement and replacement algorithms?
Placement algorithms determine where in available main memory to load the incoming process. Common methods are first-fit, next-fit, and best-fit. Replacement algorithms are used when memory is full and one process (or part of a process) needs to be swapped out to accommodate the incoming one. The replacement algorithm determines which partitions (memory portions occupied by processes) are to be swapped out.
25. In loading processes into memory, what is the difference between load-time dynamic linking and run-time dynamic linking?
For load-time dynamic linking: Load module to be loaded is read into memory. Any reference to a target external module causes that module to be loaded and the references are updated to a relative address from the start base address of the application module.
With run-time dynamic linking: Some of the linking is postponed until an actual reference during execution. Then the correct module is loaded and linked.
26. What are demand- and pre-paging?
With demand paging, a page is brought into the main-memory only when a location on that page is actually referenced during execution. With prepaging, pages other than the one demanded by a page fault are brought in. The selection of such pages is done based on common access patterns, especially for secondary memory devices.
27. What is mounting?
Mounting is the mechanism by which two different file systems can be combined together. It is one of the services provided by the operating system, allowing the user to work with two different file systems, and with secondary devices, as if they formed a single file system.
28. What do you mean by dispatch latency?
The time taken by the dispatcher to stop one process and start running another process is known as the dispatch latency.
29. What is multi-processing?
The ability of an operating system to use more than one CPU in a single computer system. Symmetrical multiprocessing refers to the OS's ability to assign tasks dynamically to the next available processor, whereas asymmetrical multiprocessing requires that the original program designer choose the processor to use for a given task at the time of writing the program.
30. What is multitasking?
Multitasking is a logical extension of multi-programming. This refers to the simultaneous execution of more than one program, by switching between them, in a single computer system.
31. Define multithreading?
The concurrent processing of several tasks or threads inside the same program or process. Several tasks can be processed in parallel, and no task has to wait for another to finish its execution.
32. Define compaction.
Compaction refers to the mechanism of shuffling the memory portions such that all the free portions of memory can be aligned (or merged) together in a single large block. The OS performs this frequently to overcome the problem of fragmentation, either internal or external. Compaction is possible only if relocation is dynamic and done at run-time; if relocation is static and done at assembly or load-time, compaction is not possible.
33. What do you mean by FAT (File Allocation Table)?
A table that indicates the physical location on secondary storage of the space allocated to a file. FAT chains the clusters (group of sectors) to define the contents of the file. FAT allocates clusters to files.
34. What is a Kernel?
The kernel is the nucleus or core of the operating system. It is the small, most intensively used part of the code, and is often thought of as being the entire operating system. Generally, the kernel is maintained permanently in main memory, and other portions of the OS are moved to and from secondary storage (mostly the hard disk).
35. What is memory-mapped I/O?
Memory-mapped I/O means that the communication between the I/O devices and the processor is done through physical memory locations in the address space. Each I/O device occupies some locations in the I/O address space, i.e., it responds when those addresses are placed on the bus. The processor can write those locations to send commands and information to the I/O device, and read those locations to get information and status from it. Memory-mapped I/O makes it easy to write device drivers in a high-level language, as long as the language can load and store from arbitrary addresses.
36. What are the advantages of threads?
Ø Threads provide parallel processing like processes, but they have one important advantage over processes: they are much more efficient.
Ø Threads are cheaper to create and destroy because they do not require allocation and de-allocation of a new address space or other process resources.
Ø It is faster to switch between threads, since the memory mapping does not have to be set up and the memory and address-translation caches do not have to be flushed.
Ø Threads are efficient as they share memory. They do not have to use system calls (which are slower because of context switches) to communicate.
37. What are kernel threads?
Threads that execute in kernel mode are called kernel threads.
38. What are the necessary conditions for deadlock to exist?
Ø Processes claim exclusive control of the resources allocated to them. (Mutual exclusion condition)
Ø Resources cannot be de-allocated from a process until it completes its execution. (No preemption condition)
Ø A process can hold one resource and wait for other resources to be allocated. (Hold and wait condition)
Ø Circular wait condition.
39. What are the strategies for dealing with deadlock?
Ø Prevention- Place restrictions on resource requests so that deadlock cannot occur.
Ø Avoidance- Plan ahead so that you never get in to a situation where deadlock is inevitable.
Ø Recovery- when deadlock is identified in the system, it recovers from it by removing some of the causes of the deadlock.
Ø Detection – detecting whether the deadlock actually exists and identifies the processes and resources that are involved in the deadlock.
40. Paging a memory management function, while multiprogramming a processor management function, are the two interdependent?
Yes. The degree of multiprogramming (a processor management decision) determines how many processes compete for page frames, while the paging behaviour of those processes determines how well they fit in memory; each constrains the other.
41. What is page cannibalizing?
Page swapping or page replacements are called page cannibalizing.
42. What has triggered the need for multitasking in PCs?
Ø Increased speed and memory capacity of microprocessors, together with support for virtual memory, and
Ø Growth of client server computing
43. What are the four layers that Windows NT have in order to achieve independence?
Ø Hardware abstraction layer
Ø Kernel
Ø Subsystems
Ø System Services.
44. What is SMP?
To achieve maximum efficiency and reliability, a mode of operation known as symmetric multiprocessing is used. In essence, with SMP any process or thread can be assigned to any processor.
45. What are the key object oriented concepts used by Windows NT?
Ø Encapsulation
Ø Object class and instance
46. Is Windows NT a full blown object oriented operating system? Give reasons.
No, Windows NT is not, because it is not implemented in an object-oriented language, its data structures reside within one executive component and are not represented as objects, and it does not support object-oriented capabilities.
47. What is a drawback of MVT?
It lacks features such as:
Ø ability to support multiple processors
Ø virtual storage
Ø source level debugging
48. What is process spawning?
When the OS at the explicit request of another process creates a process, this action is called process spawning.
49. How many jobs can be run concurrently on MVT?
15 jobs
50. List out some reasons for process termination.
Ø Normal completion
Ø Time limit exceeded
Ø Memory unavailable
Ø Bounds violation
Ø Protection error
Ø Arithmetic error
Ø Time overrun
Ø I/O failure
Ø Invalid instruction
Ø Privileged instruction
Ø Data misuse
Ø Operator or OS intervention
Ø Parent termination.
51. What are the reasons for process suspension?
Ø swapping
Ø interactive user request
Ø timing
Ø parent process request
52. What is process migration?
It is the transfer of a sufficient amount of a process's state from one machine to a target machine, so that the process can continue execution there.
53. What is mutant?
In Windows NT a mutant provides kernel mode or user mode mutual exclusion with the notion of ownership.
54. What is an idle thread?
The special thread a dispatcher will execute when no ready thread is found.
55. What is FtDisk?
It is a fault tolerance disk driver for Windows NT.
56. What are the possible states a thread can have?
Ø Ready
Ø Standby
Ø Running
Ø Waiting
Ø Transition
Ø Terminated.
57. What are rings in Windows NT?
Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation between user mode and kernel mode.
58. What is Executive in Windows NT?
In Windows NT, executive refers to the operating system code that runs in kernel mode.
59. What are the sub-components of I/O manager in Windows NT?
Ø Network redirector/ Server
Ø Cache manager.
Ø File systems
Ø Network driver
Ø Device driver
60. What are DDks? Name an operating system that includes this feature.
DDks are device driver kits, which are equivalent to SDKs for writing device drivers. Windows NT includes DDks.
61. What level of security does Windows NT meet?
C2 level security.
Section - I - File Management In Unix
1. What are the logical blocks of the UNIX file system?
Ø Boot block
Ø Super block
Ø Inode block
Ø Data block
2. What is an 'inode'?
Every UNIX file has its description stored in a structure called an 'inode'. The inode contains info about the file size, its location, time of last access, time of last modification, permissions and so on. Directories are also represented as files and have an associated inode. In addition to descriptions of the file, the inode contains pointers to the data blocks of the file. If the file is large, the inode has an indirect pointer to a block of pointers to additional data blocks (this aggregates further for larger files). A block is typically 8K.
Inode consists of the following fields:
Ø File owner identifier
Ø File type
Ø File access permissions
Ø File access times
Ø Number of links
Ø File size
Ø Location of the file data
3. How does the inode map to data block of a file?
An inode has 13 block addresses. The first 10 are direct block addresses, pointing to the first 10 data blocks of the file.
The 11th address points to a one-level index block.
The 12th address points to a two-level (double indirection) index block. The 13th address points to a three-level (triple indirection) index block.
This mapping scheme provides a very large maximum file size with efficient access to large files, while small files are still accessed directly in one disk read.
4. Brief about the directory representation in UNIX
A UNIX directory is a file containing a correspondence between filenames and inodes. A directory is a special file that the kernel maintains. Only kernel modifies directories, but processes can read directories. The contents of a directory are a list of filename and inode number pairs. When new directories are created, kernel makes two entries named '.' (refers to the directory itself) and '..' (refers to parent directory).
The system call for creating a new directory is mkdir (pathname, mode).
5. How are devices represented in UNIX?
All devices are represented by files called special files, located in the '/dev' directory. Thus, device files and other files are named and accessed in the same way.
There are two types of such special files: 'block special files' and 'character special files'. A 'block special file' represents a device with characteristics similar to a disk (data transfer in terms of blocks). A 'character special file' represents a device with characteristics similar to a keyboard (data transfer is by stream of bits in sequential order).
6. What are the Unix system calls for I/O?
Ø open(pathname,flag,mode) - open file
Ø creat(pathname,mode) - create file
Ø close(filedes) - close an open file
Ø read(filedes,buffer,bytes) - read data from an open file
Ø write(filedes,buffer,bytes) - write data to an open file
Ø lseek(filedes,offset,from) - position an open file
Ø dup(filedes) - duplicate an existing file descriptor
Ø dup2(oldfd,newfd) - duplicate to a desired file descriptor
Ø fcntl(filedes,cmd,arg) - change properties of an open file
Ø ioctl(filedes,request,arg) - change the behaviour of an open file
The difference between fcntl and ioctl is that the former is intended for any open file, while the latter is for device-specific operations.
7. How do you change File Access Permissions?
Every file has following attributes:
Ø owner's user ID ( 16 bit integer )
Ø owner's group ID ( 16 bit integer )
Ø File access mode word
'r w x - r w x - r w x'
(user permission - group permission - others permission)
r-read, w-write, x-execute.
To change the access mode, we use chmod(filename,mode).
Example 1:
To change mode of myfile to 'rw-rw-r--' (ie. read, write permission for user - read,write permission for group - only read permission for others) we give the args as:
chmod(myfile,0664) .
Each operation is represented by discrete values
'r' is 4
'w' is 2
'x' is 1
Therefore, for 'rw' the value is 6(4+2).
Example 2:
To change mode of myfile to 'rwxr--r--' we give the args as:
chmod(myfile,0744).
8. What are links and symbolic links in UNIX file system?
A link is a second name for a file. Links can be used to assign more than one name to a file, but they cannot be used to assign a directory more than one name or to link filenames on different computers.
A symbolic link is a file that only contains the name of another file. An operation on the symbolic link is directed to the file pointed to by it. Both limitations of links are eliminated in symbolic links.
Commands for linking files are:
Link ln filename1 filename2
Symbolic link ln -s filename1 filename2
9. What is a FIFO?
FIFOs are otherwise called 'named pipes'. A FIFO (first-in-first-out) is a special file that is said to be 'data transient'. Once data is read from a named pipe, it cannot be read again. Also, data can be read only in the order written. It is used in interprocess communication, where one process writes to one end of the pipe (the producer) and another reads from the other end (the consumer).
10. How do you create special files like named pipes and device files?
The system call mknod creates special files in the following sequence:
1. kernel assigns new inode,
2. sets the file type to indicate that the file is a pipe, directory or special file,
3. If it is a device file, it makes the other entries like major, minor device numbers.
For example: If the device is a disk, major device number refers to the disk controller and minor device number refers the disk.
11. Discuss the mount and unmount system calls
The privileged mount system call is used to attach a file system to a directory of another file system; the unmount system call detaches a file system. When you mount another file system on to your directory, you are essentially splicing one directory tree onto a branch in another directory tree. The first argument to the mount call is the mount point, that is, a directory in the current file naming system. The second argument is the file system to mount at that point. When you insert a cdrom into your Unix system's drive, the file system on the cdrom is automatically mounted on a directory such as /mnt/cdrom in your system.
12. What are surrogate super blocks and surrogate inode tables?
Whenever we use any file or change its permissions, these changes should be made on the disk; but this can be time consuming. Hence a copy of the super block and an inode table is maintained in the RAM that are called as the surrogate super blocks and inode tables respectively.
The 'sync' command synchronizes the inode table in memory with the one on the disk by simply overwriting the memory copy onto the disk.
13. Assuming the block size to be 1KB calculate the maximum size of a file in the Unix file system.
The first 10 data block pointers can point to 10 data blocks, each of size 1 KB, i.e., 10 KB.
The 11th pointer points to a table of 256 block pointers, each pointing to a 1 KB data block, i.e., 256 KB. Similarly, the 12th pointer can address
256 × 256 KB, i.e., 64 MB, and the 13th pointer 256 × 64 MB, i.e., 16 GB. Hence the maximum size of the file is 10 KB + 256 KB + 64 MB + 16 GB.
14. What are the uses of these disk related commands: df, dfspace, du and ulimit?
$ df - reports the free as well as used disk space,
$ dfspace - same as df but is more explanatory,
$ du - shows the disk space used by a specified file,
$ ulimit - prevents the user from creating files of very large size.
Section – II
Process Management
1. Brief about the initial process sequence while the system boots up.
While booting, a special process called the 'swapper' or 'scheduler' is created with Process-ID 0. The swapper manages memory allocation for processes and influences CPU allocation. The swapper in turn creates 3 children:
Ø the process dispatcher,
Ø vhand and
Ø dbflush
with IDs 1,2 and 3 respectively.
This is done by executing the file /etc/init. Process dispatcher gives birth to the shell. Unix keeps track of all the processes in an internal data structure called the Process Table (listing command is ps -el).
2. What are various IDs associated with a process?
Unix identifies each process with a unique integer called the Process ID (PID). The process that executes the request for creation of a process is called the 'parent process' of the newly created process.
Every process is associated with a particular user called the 'owner', who initiates the process and has privileges over it. The identification for the user is the 'User ID'. A process also has an 'Effective User ID', which determines its access privileges for resources like files. The system calls used for getting the various IDs are:
getpid() - process id
getppid() - parent process id
getuid() - user id
geteuid() - effective user id
3. What is the range of values PID can take?
PID can range from 0 to 32767.
4. What are the process states in Unix?
As a process executes it changes state according to its circumstances. Unix processes have the following states:
Running : The process is either running or it is ready to run .
Waiting : The process is waiting for an event or for a resource.
Stopped : The process has been stopped, usually by receiving a signal.
Zombie : The process is dead but has not been removed from the process table.
5. What Happens when you execute a program?
When you execute a program on your UNIX system, the system creates a special environment for that program. This environment contains everything needed for the system to run the program as if no other program were running on the system. Each process has process context, which is everything that is unique about the state of the program you are currently running. Every time you execute a program the UNIX system does a fork, which performs a series of operations to create a process context and then execute your program in that context. The steps include the following:
Ø Allocate a slot in the process table, a list of currently running programs kept by UNIX.
Ø Assign a unique process identifier (PID) to the process.
Ø Copy the context of the parent, the process that requested the spawning of the new process.
Ø Return the new PID to the parent process. This enables the parent process to examine or control the process directly.
After the fork is complete, UNIX runs your program.
6. What Happens when you execute a command?
When you enter the 'ls' command to look at the contents of your current working directory, UNIX does a series of things to create an environment for ls and then run it:
Ø The shell has UNIX perform a fork. This creates a new process that the shell will use to run the ls program.
Ø The shell has UNIX perform an exec of the ls program. This replaces the shell program and data with the program and data for ls, and then starts running that new program.
Ø The ls program is loaded into the new process context, replacing the text and data of the shell.
Ø The ls program performs its task, listing the contents of the current directory.
7. What is a zombie process?
When a program forks and the child finishes before the parent, the kernel still keeps some of its information about the child in case the parent might need it - for example, the parent may need to check the child's exit status. To be able to get this information, the parent calls `wait()'; In the interval between the child terminating and the parent calling `wait()', the child is said to be a `zombie' (If you do `ps', the child will have a `Z' in its status field to indicate this.)
8. What is a daemon process?
A daemon is a process that detaches itself from the terminal and runs, disconnected, in the background, waiting for requests and responding to them. It can also be defined as the background process that does not belong to a terminal session. Many system functions
are commonly performed by daemons, including the sendmail daemon, which handles mail, and the NNTP daemon, which handles USENET news. Many other daemons may exist. Some of the most common daemons are:
Ø init: Takes over the basic running of the system when the kernel has finished the boot process.
Ø inetd: Responsible for starting network services that do not have their own stand-alone daemons. For example, inetd usually takes care of incoming rlogin, telnet, and ftp connections.
Ø cron: Responsible for running repetitive tasks on a regular schedule.
Daemons can be roughly classified as system and user daemons.
9. What is an advantage of executing a process in background?
The most common reason to put a process in the background is to allow you to do something else interactively without waiting for the process to complete. At the end of the command you add the special background symbol, &. This symbol tells your shell to execute the given command in the background.
Example: cp *.* ../backup & (cp is the copy command; the & runs it in the background)
10. How do you execute one program from within another?
The exec family of system calls is used to execute one program from within another. The execlp() call overlays the existing process image with the new program and runs it; the original program regains control only if the exec fails (an error return).
execlp(file_name, arg0, arg1, ..., NULL);
//the argument list must be terminated by NULL
A variant of execlp() called execvp() is used when the number of arguments is not known in advance:
execvp(file_name, argument_array);
//the argument array must be terminated by NULL
Both calls search the directories listed in PATH for the given file name.
11. List the system calls used for process management:
System call     Description
fork()          To create a new process
exec()          To execute a new program in a process
wait()          To wait until a created process completes its execution
exit()          To exit from a process execution
getpid()        To get the process identifier of the current process
getppid()       To get the parent process identifier
nice()          To bias the existing priority of a process
brk()           To increase/decrease the data segment size of a process
12. Explain fork() system call.
The fork() system call is used to create a new process from an existing process. The new process is called the child process, and the existing process is called the parent. You can tell which is which by checking the return value from fork(): the parent gets the child's PID returned to it, while the child gets 0.
13. Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main() {
    fork();
    printf("Hello World!");
    return 0;
}
Answer:
Hello World!Hello World!
Explanation:
The fork() creates a child that is a duplicate of the parent process. The child begins executing from the point of the fork() call, so all the statements after the call to fork() are executed twice (once by the parent process and once by the child). Any statement before fork() is executed only by the parent process.
14. Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main() {
    fork(); fork(); fork();
    printf("Hello World!");
    return 0;
}
Answer:
"Hello World" will be printed 8 times.
Explanation:
It is printed 2^n times, where n is the number of calls to fork() - here 2^3 = 8.
15. How can you get/set an environment variable from a program?
Getting the value of an environment variable is done by using `getenv()'. Setting the value of an environment variable is done by using `putenv()'.
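A short sketch of both calls; MY_VAR is an arbitrary name chosen for this example. Note that the string handed to putenv() becomes part of the environment and must remain valid, so a string literal is fine here:

```c
#include <stdlib.h>
#include <string.h>

/* Set an environment variable with putenv(), then read it back
 * with getenv(). */
const char *set_and_get(void) {
    putenv("MY_VAR=hello");        /* string becomes part of environ */
    return getenv("MY_VAR");       /* NULL if the variable is unset  */
}
```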
16. How can a parent and child process communicate?
A parent and child can communicate through any of the normal inter-process communication schemes (pipes, sockets, message queues, shared memory), but also have some special ways to communicate that take advantage of their relationship as a parent and child. One of the most obvious is that the parent can get the exit status of the child.
17. What is IPC? What are the various schemes available?
The term IPC (Inter-Process Communication) describes the various ways by which different processes running on an operating system communicate with each other. The various schemes available are as follows:
Pipes:
A one-way communication scheme through which different processes can communicate. The limitation is that the two processes must have a common ancestor (a parent-child relationship). This limitation was removed with the introduction of named pipes (FIFOs).
Message Queues :
Message queues can be used between related and unrelated processes running on a machine.
Shared Memory:
This is the fastest of all IPC schemes. The memory to be shared is mapped into the address space of the processes that are sharing it. The speed is attributed to the fact that there is no kernel involvement in the data transfer, but this scheme needs synchronization.
Various forms of synchronisation are mutexes, condition-variables, read-write locks, record-locks, and semaphores.
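The pipe scheme above, with its parent-child requirement, can be sketched as follows: the child writes a message into the pipe and the parent reads it. A minimal illustration with error handling reduced to the essentials:

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* A one-way pipe between parent and child: the child writes a message,
 * the parent reads it. Returns the number of bytes received. */
ssize_t pipe_message(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) < 0)              /* fd[0] = read end, fd[1] = write end */
        return -1;
    if (fork() == 0) {             /* child inherits both ends            */
        close(fd[0]);
        write(fd[1], "hello", 5);
        _exit(0);
    }
    close(fd[1]);                  /* parent closes its unused write end  */
    ssize_t n = read(fd[0], buf, len);
    close(fd[0]);
    wait(NULL);                    /* reap the child                      */
    return n;
}
```

The common ancestor requirement shows up directly: both processes can use the pipe only because the child inherited the descriptors created before the fork().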
18. Explain 'ps' command and its purpose.
The ps command prints the process status for some or all of the running processes. The information given includes the process identification number (PID), the amount of CPU time the process has consumed so far, etc.
The options used in this command are
$ ps -a :: Lists the processes running for other users.
$ ps -t :: Lists the processes running in a particular terminal.
$ ps -f :: Lists the processes along with the PPID.
$ ps -e :: Lists every process running at that instant.
19. How would you kill a process?
The kill command takes the PID as an argument; this identifies which process to terminate. The PID of a process can be obtained using the 'ps' command.
20. For some reason, the process with PID 6173 could not be terminated with the command '$ kill 6173'. What could be the reason and how can you terminate that process?
When invoked, the kill command sends a termination signal to the process being killed. Since no signal number is specified, UNIX assumes the default signal, which some processes can catch or ignore.
In such cases we can use signal number 9 (SIGKILL), which cannot be caught or ignored: '$ kill -9 6173'.
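The same thing can be done programmatically with the kill() system call. A sketch: the parent forks a child that simply waits, sends it SIGKILL, and then confirms from the wait status that the signal terminated it:

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Programmatic equivalent of `kill -9 <pid>`: send SIGKILL, which a
 * process cannot catch or ignore, then report how the child died. */
int kill_child_with_sigkill(void) {
    pid_t pid = fork();
    if (pid == 0) {
        pause();                   /* child just waits for a signal    */
        _exit(0);
    }
    kill(pid, SIGKILL);            /* same effect as `kill -9 pid`     */
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```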
19. What is a shell?
A shell is an interactive user interface to the services of an operating system that allows a user to enter commands as character strings or through a graphical user interface. The shell converts them to system calls to the OS or forks off a process to execute the command. Results of the system calls and other information from the OS are presented to the user through the interactive interface. Commonly used shells are sh, csh, ksh, etc.
21. Explain about the process priority values.
Each process is assigned a priority value; the higher the value, the lower the priority. The priority value for a process ranges from 0 to 39, and the default is 20. A user is allowed to increase the value (lowering the priority) but cannot decrease it.
22. What does the command '$ nice -15 cat emp.dat' do?
The priority value of the cat emp.dat command is increased from 20 to 35. This slows the command down, since a higher priority value means a lower priority.
23. Write a command such that at exactly 5 pm the message "time is 5 pm" appears on the terminal named tty3c.
$ at 17:00
echo "time is 5 pm" > /dev/tty3c
Ctrl-D
$
24. What does the batch command do?
The batch command lets the system decide the best time for executing our commands; it may not execute them immediately. The submitted commands are queued and executed when the system is free.
25. How is the 'crontab' command different from 'at'?
The crontab command can carry out a submitted job every day for years together without any prompting from the user. A job submitted with 'at' is executed only once.
Section - III
Memory Management
1. What is the difference between Swapping and Paging?
Swapping:
The whole process is moved from the swap device to main memory for execution, so the process size must be less than or equal to the available main memory. Swapping is easier to implement but adds overhead to the system, and it does not handle memory as flexibly as paging does.
Paging:
Only the required memory pages are moved from the swap device to main memory for execution, so the process size does not matter. Paging gives rise to the concept of virtual memory.
It provides greater flexibility in mapping the virtual address space into the physical memory of the machine, allows more processes to fit in main memory simultaneously, and allows a process to be larger than the available physical memory. Demand paging systems handle memory more flexibly.
2. What is the major difference between the Historic Unix and the new BSD release of Unix System V in terms of Memory Management?
Historic UNIX uses swapping - the entire process is transferred to main memory from the swap device - whereas UNIX System V uses demand paging - only part of the process is moved to main memory. Historic UNIX uses one swap device, while UNIX System V allows multiple swap devices.
3. What is the main goal of memory management?
Ø It decides which processes should reside in main memory,
Ø It manages the parts of the virtual address space of a process that are not core-resident,
Ø It monitors the available main memory and periodically writes processes out to the swap device so that more processes can fit in main memory simultaneously.
4. What is a Map?
A Map is an array that contains the addresses of the free space in the swap device (the allocatable resource) and the number of resource units available there.
This allows first-fit allocation of contiguous blocks of a resource. Initially the Map contains one entry: the address (block offset from the start of the swap area) and the total number of resources.
The kernel treats each unit of the Map as a group of disk blocks. On allocation and freeing of resources, the kernel updates the Map to keep the information accurate.
5. What is a Region?
A Region is a continuous area of a process's address space (such as text, data and stack). The kernel maintains regions in a Region Table; each process has a private per-process region table that points to its regions. Regions are sharable among processes.
6. What does the kernel do after a process is swapped out of main memory?
When the kernel swaps a process out of primary memory, it performs the following:
Ø It decrements the reference count of each region of the process; if a reference count becomes zero, it swaps the region out of main memory,
Ø It allocates space for the swapped process on the swap device,
Ø It locks other swapping operations while the current one is in progress,
Ø It saves the swap address of each region in the region table.
7. Is the process the same before and after the swap? Give reason.
Before swapping, the process resides in primary memory in its original form. The regions (text, data and stack) may not be fully occupied - there may be empty slots in any of the regions - and the kernel does not copy those empty slots while swapping the process out.
After swapping, the process resides on the swap device with only the occupied region slots present, not the empty slots that existed before.
While swapping the process back into main memory, the kernel refers to the process memory map and assigns main memory accordingly, taking care of the empty slots in the regions.
8. What do you mean by u-area (user area or u-block)?
The u-area contains the private data of a process that is manipulated only by the kernel. It is local to the process, i.e. each process is allocated its own u-area.
9. What are the entities that are swapped out of the main memory while swapping the process out of the main memory?
Theoretically, all the memory space occupied by the process, the process's u-area, and its kernel stack are swapped out.
In practice, if the process's u-area contains the address translation tables for the process, the kernel implementation does not swap the u-area.
10. What is Fork swap?
fork() is the system call that creates a child process. When the parent process calls fork() and there is a shortage of memory, the child process is created in the ready-to-run state on the swap device, and the parent returns to user mode without being swapped out. When memory becomes available, the child process is swapped into main memory.
11. What is Expansion swap?
When a process requires more memory than is currently allocated to it, the kernel performs an expansion swap. To do this, the kernel reserves enough space in the swap device, then adjusts the address translation mapping for the new virtual address space without allocating physical memory. Finally the kernel swaps the process out into the reserved space in the swap device. Later, when the kernel swaps the process back into main memory, it allocates memory according to the new address translation map.
12. How does the swapper work?
The swapper is the only process that swaps other processes. It operates only in kernel mode, and it does not use system calls; instead it uses internal kernel functions for swapping. It is the archetype of all kernel processes.
13. What are the processes that are not bothered by the swapper? Give Reason.
Ø Zombie processes: they do not take up any physical memory.
Ø Processes locked in memory, i.e. processes that are in the middle of updating a region.
Ø The kernel swaps out only sleeping processes rather than ready-to-run processes, as ready-to-run processes have a higher probability of being scheduled soon.
14. What are the requirements for a swapper to work?
The swapper runs at the highest scheduling priority. First it looks for a sleeping process to swap out; if none is found, it looks for a ready-to-run process. But a major requirement is that a ready-to-run process must have been core-resident for a few seconds before being swapped out, and a process must have resided on the swap device for a few seconds before being swapped in. If these requirements are not satisfied, the swapper goes into a wait state on that event, and the kernel awakens it once a second.
15. What are the criteria for choosing a process for swapping into memory from the swap device?
The residence time of the process on the swap device, the priority of the process, and the amount of time the process has been swapped out.
16. What are the criteria for choosing a process for swapping out of the memory to the swap device?
Ø The process's memory residence time,
Ø The priority of the process, and
Ø The nice value.
17. What do you mean by nice value?
The nice value is a value that controls (increments or decrements) the priority of a process and is set by the nice() system call. The equation for using the nice value is:
Priority = ("recent CPU usage"/constant) + (base priority) + (nice value)
Only the superuser can supply a negative nice value (which raises priority). The nice() system call works on the running process only; the nice value of one process cannot affect the nice value of another.
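The equation above can be sketched in C. The decay constant and base priority are implementation-defined; the values here (constant 2, base priority 60) are illustrative assumptions, not a definitive implementation:

```c
/* Sketch of the priority equation. DECAY_CONSTANT and BASE_PRIORITY
 * are assumed illustrative values; real kernels choose their own. */
enum { DECAY_CONSTANT = 2, BASE_PRIORITY = 60 };

/* Higher returned value = lower scheduling priority. */
int priority(int recent_cpu_usage, int nice_value) {
    return recent_cpu_usage / DECAY_CONSTANT + BASE_PRIORITY + nice_value;
}
```

With these assumptions, a process that recently used 40 ticks of CPU and has nice value 5 gets priority 40/2 + 60 + 5 = 85, i.e. a lower scheduling priority than a freshly scheduled process.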
18. Under what conditions can deadlock occur while swapping processes?
Ø All processes in the main memory are asleep.
Ø All ‗ready-to-run‘ processes are swapped out.
Ø There is no space in the swap device for new processes being swapped out of main memory.
Ø There is no space in the main memory for the new incoming process.
19. What are the conditions for a machine to support demand paging?
Ø The memory architecture must be based on pages,
Ø The machine must support 'restartable' instructions.
20. What is 'the principle of locality'?
It is the nature of processes to refer only to a small subset of their total data space at a time, i.e. a process frequently calls the same subroutines or executes loop instructions.
21. What is the working set of a process?
The set of pages that have been referred to by the process in its last 'n' references, where 'n' is called the window of the working set of the process.
22. What is the window of the working set of a process?
The window of the working set of a process is the number 'n' of most recent page references that are examined to form the working set of the process.
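The definitions above can be made concrete with a small sketch that computes the working-set size for a given reference string and window. This is an illustrative O(n^2) helper, not any kernel's actual mechanism:

```c
/* Size of the working set: the number of distinct pages among the
 * last `window` entries of a page-reference string. */
int working_set_size(const int *refs, int nrefs, int window) {
    int start = nrefs > window ? nrefs - window : 0;
    int distinct = 0;
    for (int i = start; i < nrefs; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen)
            distinct++;            /* first occurrence inside the window */
    }
    return distinct;
}
```

For the reference string 1 2 3 2 1 4 4 with window 4, the last four references are 2 1 4 4, so the working set is {1, 2, 4} and its size is 3; widening the window can only grow the set, which is why the window size controls how much memory the process appears to need.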
23. What is a page fault?
A page fault occurs when a process references a page that belongs to its address space but is not present in main memory. On a page fault, the kernel updates the working set by reading the page in from the secondary device.
24. What are the data structures used for demand paging?
The kernel contains four data structures for demand paging. They are:
Ø Page table entries,
Ø Disk block descriptors,
Ø Page frame data table (pfdat),
Ø Swap-use table.
26. What are the bits (UNIX System V) that support demand paging?
Valid, reference, modify, copy on write, and age. These bits are part of the page table entry, which also includes the physical address of the page and protection bits:
Page address | Age | Copy on write | Modify | Reference | Valid | Protection
27. What is the difference between the fork() and vfork() system calls?
During the fork() system call, the kernel makes a copy of the parent process's address space and attaches it to the child process.
The vfork() system call does not make a copy of the parent's address space, so it is faster than fork(). The child process resulting from vfork() is expected to call exec() almost immediately; until then it executes in the parent's address space (and can therefore overwrite the parent's data and stack), and the parent process is suspended until the child exits or calls exec().
28. What is BSS(Block Started by Symbol)?
BSS is a machine-level data representation for a program's statically allocated, uninitialized variables. The executable file records only how much space the kernel must allocate for this uninitialized data; the kernel initializes it to zero at run time.
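A small sketch of the difference between BSS and initialized data; the variable names are arbitrary choices for this illustration:

```c
/* `uninitialized` has no explicit initial value, so it lives in the
 * BSS segment: the executable stores only its size, and the kernel
 * zero-fills the space when the program is loaded. `initialized`
 * carries explicit values and lives in the data segment instead. */
static int uninitialized[1000];           /* BSS: zeroed at load time */
static int initialized[3] = { 7, 8, 9 };  /* data segment             */

int first_bss_value(void)  { return uninitialized[0]; }
int first_data_value(void) { return initialized[0]; }
```

Because only the size of the BSS is recorded, making `uninitialized` much larger does not grow the executable file on disk, whereas growing `initialized` does.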
29. What is Page-Stealer process?
This is the kernel process that makes room for incoming pages by swapping out memory pages that are not part of the working set of a process. The page stealer is created by the kernel at system initialization and is invoked throughout the lifetime of the system. The kernel locks a region when a process faults on a page in that region, so that the page stealer cannot steal the page being faulted in.
30. Name two paging states for a page in memory?
The two paging states are:
Ø The page is aging and is not yet eligible for swapping,
Ø The page is eligible for swapping but not yet eligible for reassignment to other virtual address space.
31. What are the phases of swapping a page from the memory?
Ø Page stealer finds the page eligible for swapping and places the page number in the list of pages to be swapped.
Ø Kernel copies the page to a swap device when necessary and clears the valid bit in the page table entry, decrements the pfdata reference count, and places the pfdata table entry at the end of the free list if its reference count is 0.
33. What is a page fault? What are its types?
Page fault refers to the situation of not having a page in the main memory when any process references it.
There are two types of page fault :
Ø Validity fault,
Ø Protection fault.
34. What is a validity fault?
If a process refers to a page in main memory whose valid bit is not set, it results in a validity fault.
The valid bit is not set for those pages:
Ø that are outside the virtual address space of the process,
Ø that are part of the virtual address space of the process but have no physical address assigned.
35. What do you mean by a protection fault?
A protection fault occurs when a process accesses a page without the required access permission. A process also incurs a protection fault when it attempts to write to a page whose copy-on-write bit (UNIX System V) was set during the fork() system call.
36. In what way are fault handlers different from interrupt handlers?
A fault handler is also an interrupt handler, with the exception that interrupt handlers cannot sleep, whereas fault handlers sleep in the context of the process that caused the memory fault. The fault always refers to the running process; no arbitrary process is put to sleep.
37. What does the swapping system do if it identifies an illegal page for swapping?
If the disk block descriptor does not contain any record of the faulted page, the attempted memory reference is invalid and the kernel sends a "segmentation violation" signal to the offending process. This happens when the swapping system identifies an invalid memory reference.
38. What are the states a page can be in after causing a page fault?
Ø On a swap device and not in memory,
Ø On the free page list in the main memory,
Ø In an executable file,
Ø Marked "demand zero",
Ø Marked "demand fill".
39. How does the validity fault handler conclude?
Ø It sets the valid bit of the page and clears the modify bit.
Ø It recalculates the process priority.
40. In what mode does the fault handler execute?
In kernel mode.
41. How does the kernel handle the copy-on-write bit of a page when the bit is set?
If the copy-on-write bit (UNIX System V) of a page is set and the page is shared by more than one process, the kernel allocates a new page, copies the contents into it, and lets the other processes retain their references to the old page. After copying, the kernel updates the page table entry with the new page number and decrements the reference count of the old pfdata table entry.
If the copy-on-write bit is set but no other process shares the page, the kernel allows the physical page to be reused: it clears the copy-on-write bit and disassociates the page from its disk copy (if one exists), because another process may share the disk copy. It then removes the pfdata table entry from the page queue, since the new copy of the virtual page is not on the swap device, and decrements the swap-use count for the page; if the count drops to 0, it frees the swap space.
42. For which kind of fault is the page checked first?
The page is first checked for a validity fault. If the page is invalid (valid bit clear), the validity fault handler runs and the process incurs the validity fault; after the kernel handles it, the process incurs a protection fault if one is also present.
43. How does the protection fault handler conclude?
Before returning, the protection fault handler sets the modify and protection bits and clears the copy-on-write bit (all bits as in UNIX System V). It recalculates the process priority and checks for signals.
44. How does the kernel handle both the page stealer and the fault handler?
The page stealer and the fault handler thrash pages when memory is short. If the sum of the working sets of all processes is greater than the physical memory, the fault handler will usually sleep because it cannot allocate pages for a process. This reduces system throughput because the kernel spends too much time in overhead, rearranging memory at a fast pace.