My Recent Interview Questions Related to Data Structures Algorithms and Java

Here is the list of questions asked to me at Microsoft, Salesforce, Kony Labs and other companies.

  • Find a key in a given rotated sorted array.
  • Check the validity of a BST.
  • Find a given word in a 2-D matrix and print the path if it exists.
  • Print the length of the longest palindrome in a given string.

An ArrayList is an ordered collection (also known as a sequence): the user has precise control over where each element is inserted. The class is very similar to Vector, except that it is not synchronized; if you need thread safety it must be synchronized externally.
An ArrayList is a better choice than a plain array when you do not know the number of elements in advance. ArrayLists are somewhat slower than arrays, so if you need raw efficiency use arrays where possible.
Short overview of the main features of ArrayList:

  • The add operation runs in amortized constant time; adding n elements takes O(n) time overall.
  • The isEmpty, size, iterator, set, get and listIterator operations run in constant time, regardless of which element you access.
  • Only objects can be added to an ArrayList (primitives are autoboxed).
  • Implements all methods of the List interface.
  • Permits null elements.
An ArrayList can be a good choice if you need a flexible array-like collection with modest functionality.
One small remark for those who need a faster ArrayList. An ArrayList is internally backed by an array. When an add exceeds the current capacity, a new, larger array is created and all existing elements are copied into it before the new element is stored (growth happens only when capacity runs out, not on every add). To avoid that internal re-copying you can use the ensureCapacity(int minCapacity) method, which grows the backing array once, up front.
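A minimal sketch of the pre-sizing idea above: calling ensureCapacity before a bulk insert means the backing array is grown a single time, so the loop of add calls never triggers a regrowth-and-copy.

```java
import java.util.ArrayList;

// Pre-sizing an ArrayList with ensureCapacity avoids repeated internal
// array re-allocation and copying while many elements are added.
public class EnsureCapacityDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        ArrayList<Integer> list = new ArrayList<>();
        list.ensureCapacity(n);          // grow the backing array once, up front
        for (int i = 0; i < n; i++) {
            list.add(i);                 // amortized O(1); no regrowth needed here
        }
        System.out.println(list.size()); // prints 1000000
    }
}
```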
If you need a richer set of operations than ArrayList offers for an array-like collection, there is a better choice: LinkedList.
LinkedList is much more flexible and lets you insert, add and remove elements at both ends of the collection; it can be used as a queue and even as a double-ended queue (deque)!
Internally a LinkedList does not use an array at all. A LinkedList is a sequence of doubly linked nodes. Each node stores an element plus two links (pointers) to the next and previous nodes. A LinkedList looks like a chain of people holding each other's hands: you can insert a person (node) into the chain or remove one. Linked lists permit node insertion and removal at any point in the list in constant time.
LinkedList is not an extension of ArrayList, but it is a luxury version of a list: it gives you many methods that you would otherwise implement yourself around other collection types.
For example, LinkedList provides explicit addFirst(E e) and addLast(E e) methods, so you never have to compute an index to work with either end of the collection.
Another example: depending on your expectations you can choose either the getFirst/getLast or the peekFirst/peekLast methods. The "peek" methods are a newer sort of method, introduced in Java 1.6.
They differ slightly from the get methods: they do not throw any kind of exception if the list is empty, they just return null.
The new "poll" methods, pollFirst and pollLast, combine two operations: get an element and remove it from the collection. For example, pollFirst() retrieves and removes the first element of this list. It does not throw an exception (the way ArrayList's get(int) throws IndexOutOfBoundsException); instead it just returns null if the list is empty.
Because LinkedList implements the Deque interface, you can push and pop elements into and out of your collection.
For a capacity-restricted deque you can "examine" the result of an operation by using the offerFirst/offerLast methods. They are also new (since Java 1.6) and return a boolean result instead of throwing IllegalStateException as addFirst/addLast do: if it is possible to insert the element at the beginning/end of the collection, you get true.
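The deque behaviour described above can be sketched in a few lines; note how pollFirst quietly returns null on an empty list instead of throwing:

```java
import java.util.LinkedList;

// LinkedList implements Deque, so it can serve as a queue or a
// double-ended queue: addFirst/addLast, peekFirst/peekLast, pollFirst/pollLast.
public class DequeDemo {
    public static void main(String[] args) {
        LinkedList<String> deque = new LinkedList<>();
        deque.addLast("b");
        deque.addFirst("a");
        deque.addLast("c");                    // deque is now [a, b, c]
        System.out.println(deque.peekFirst()); // a  (look, don't remove)
        System.out.println(deque.pollLast());  // c  (remove and return)
        System.out.println(deque.pollFirst()); // a
        System.out.println(deque.pollFirst()); // b
        System.out.println(deque.pollFirst()); // null, not an exception
    }
}
```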

Vector is similar to ArrayList, with the difference that it is synchronized. It offers some other features: it has an initial capacity and a capacity increment. So if your Vector has a capacity of 10 and a capacity increment of 10, then when you add the 11th element a new internal array of 20 elements is created and the existing elements are copied into it. Adding the 12th through 20th elements then requires no further re-allocation.
Hashtable is basically a data structure for storing key-value pairs.

  • It does not allow null for either key or value; you will get a NullPointerException if you add one.
  • It is synchronized, which comes at a cost: only one thread can access it at a time.

HashMap, like Hashtable, also stores key-value pairs.
  • It allows null for both keys and values (one null key, any number of null values).
  • It is unsynchronized, so it generally performs better.

HashSet does not allow duplicate values. It provides an add method rather than a put method. You can also use its contains method to check whether an object is already present in the HashSet. A HashSet can be used wherever you want to maintain a unique list.
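A short sketch of the two points above: HashMap tolerating nulls, and HashSet rejecting duplicates via add/contains.

```java
import java.util.HashMap;
import java.util.HashSet;

// HashMap tolerates nulls; HashSet rejects duplicates via add/contains.
public class MapSetDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put(null, "nullKey");            // one null key is allowed
        map.put("k", null);                  // null values are allowed
        System.out.println(map.get(null));   // prints nullKey

        HashSet<String> set = new HashSet<>();
        System.out.println(set.add("x"));       // true: newly added
        System.out.println(set.add("x"));       // false: duplicate rejected
        System.out.println(set.contains("x"));  // true
        System.out.println(set.size());         // 1
    }
}
```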

Collection is the root interface of the collection hierarchy. It groups multiple elements into a single unit and allows duplicate and non-duplicate elements, which may be ordered or unordered.
Collections is a class that extends Object and consists exclusively of static methods. It is a member of the Java Collections Framework. Collections are used to store, retrieve, manipulate, and communicate aggregate data.

Collection is the interface, implemented via List, Set and Queue; it contains only instance methods.
Collections is the class; it contains utility methods, i.e. the algorithm-oriented methods, all of them static.
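The distinction can be shown in two lines: a List (a Collection implementation) holds the data, while the Collections utility class supplies the static algorithms that operate on it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Collection is the interface a container implements; Collections is the
// utility class of static algorithm methods that operate on such containers.
public class CollectionsDemo {
    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(Arrays.asList(3, 1, 2));
        Collections.sort(nums);                    // static utility method
        System.out.println(nums);                  // [1, 2, 3]
        System.out.println(Collections.max(nums)); // 3
        List<Integer> locked = Collections.unmodifiableList(nums);
        System.out.println(locked.get(0));         // 1
    }
}
```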
Other nonabstract methods can access a method that you declare as abstract.
But first, let's look at when to use normal class definitions and when to use interfaces. Then I'll tackle abstract classes.
Class vs. interface
Some say you should define all classes in terms of interfaces, but I think that recommendation seems a bit extreme. I use interfaces when I see that something in my design will change frequently.
For example, the Strategy pattern lets you swap new algorithms and processes into your program without altering the objects that use them. A media player might know how to play CDs, MP3s, and wav files. Of course, you don't want to hardcode those playback algorithms into the player; that will make it difficult to add a new format like AVI. Furthermore, your code will be littered with useless case statements. And to add insult to injury, you will need to update those case statements each time you add a new algorithm. All in all, this is not a very object-oriented way to program.
With the Strategy pattern, you can simply encapsulate the algorithm behind an object. If you do that, you can provide new media plug-ins at any time. Let's call the plug-in class MediaStrategy. That object would have one method: playStream(Stream s). So to add a new algorithm, we simply extend our algorithm class. Now, when the program encounters the new media type, it simply delegates the playing of the stream to our media strategy. Of course, you'll need some plumbing to properly instantiate the algorithm strategies you will need.
This is an excellent place to use an interface. We've used the Strategy pattern, which clearly indicates a place in the design that will change. Thus, you should define the strategy as an interface. You should generally favor interfaces over inheritance when you want an object to have a certain type; in this case, MediaStrategy. Relying on inheritance for type identity is dangerous; it locks you into a particular inheritance hierarchy. Java doesn't allow multiple inheritance, so you can't extend something that gives you a useful implementation or more type identity.
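A minimal sketch of the Strategy design just described. MediaStrategy and the concrete class names are hypothetical illustrations (the playStream signature is simplified to strings), not a real media library.

```java
// A sketch of the Strategy pattern described above. MediaStrategy and the
// concrete class names are hypothetical, not from any real media library.
interface MediaStrategy {
    String playStream(String stream);   // stand-in for playStream(Stream s)
}

class Mp3Strategy implements MediaStrategy {
    public String playStream(String stream) { return "mp3:" + stream; }
}

class WavStrategy implements MediaStrategy {
    public String playStream(String stream) { return "wav:" + stream; }
}

public class Player {
    public static void main(String[] args) {
        // the player delegates playback; no case statements over formats
        MediaStrategy strategy = new Mp3Strategy();
        System.out.println(strategy.playStream("song"));   // mp3:song
        strategy = new WavStrategy();                      // swap algorithms freely
        System.out.println(strategy.playStream("song"));   // wav:song
    }
}
```

Adding a new format then means writing one new class, with no changes to the player itself.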
Interface vs. abstract class
Choosing interfaces and abstract classes is not an either/or proposition. If you need to change your design, make it an interface. However, you may have abstract classes that provide some default behavior. Abstract classes are excellent candidates inside of application frameworks.
Abstract classes let you define some behaviors; they force your subclasses to provide others. For example, if you have an application framework, an abstract class may provide default services such as event and message handling. Those services allow your application to plug in to your application framework. However, there is some application-specific functionality that only your application can perform. Such functionality might include startup and shutdown tasks, which are often application-dependent. So instead of trying to define that behavior itself, the abstract base class can declare abstract shutdown and startup methods. The base class knows that it needs those methods, but an abstract class lets your class admit that it doesn't know how to perform those actions; it only knows that it must initiate the actions. When it is time to start up, the abstract class can call the startup method. When the base class calls this method, Java calls the method defined by the child class.
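The framework idea above can be sketched as follows; the class names are illustrative, not a real framework. The abstract base supplies a default service and declares the startup/shutdown actions it knows it must initiate but cannot perform itself.

```java
// A sketch of the application-framework idea: the abstract base supplies
// default services and declares startup/shutdown, which only the concrete
// application knows how to perform. Names here are hypothetical.
abstract class ApplicationFramework {
    // default service provided by the framework
    void handleMessage(String msg) { System.out.println("handled " + msg); }

    // the base class knows it must initiate these, not how to do them
    abstract void startup();
    abstract void shutdown();

    void run() {
        startup();                 // Java dispatches to the subclass method
        handleMessage("event");
        shutdown();
    }
}

public class MyApp extends ApplicationFramework {
    void startup()  { System.out.println("app starting"); }
    void shutdown() { System.out.println("app stopping"); }

    public static void main(String[] args) {
        new MyApp().run();
    }
}
```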

Here we are giving chapter-wise notes on operating systems, based on Operating System Concepts by Galvin.

These are listed under the following headings:

Computer System Structures

OS Structures

CPU Scheduling

Virtual Memory


Virtual Memory

q) how is virtual memory different from other memory-management techniques?
a) it allows the execution of processes that need not be completely in memory.

q) what are its advantages?
a) a process may be larger than physical memory. it also allows processes to easily share files and address spaces.

note:- the entire program is not always needed; for example:-
1) code that handles exceptions.
2) arrays, lists and tables are often allocated more memory than actually needed.

benefits of keeping only the required instructions and data in memory:-
1) more programs can fit in memory, giving more cpu utilization and throughput.
2) less I/O is required to load or swap the processes.

note:- systems that support virtual memory do not bother the programmer with designing overlays.

q) what is the key concept behind virtual memory?
a) demand paging. demand segmentation can also be used, but segment-replacement algorithms are more complex because segments have variable sizes.

note:- with demand paging a lazy pager is used, so a process need not be entirely in memory before its execution starts. instead of bringing all pages into memory, the pager guesses which pages will be used and loads only those pages, decreasing both the swap time and the memory needed.

q) how do we modify the page table to know which pages are in memory and which are not?
a) with the help of a valid-invalid bit. if the bit is valid, the page is in the logical address space and present in memory; if invalid, the page is either not a valid logical address or is currently on disk rather than in memory.

note:- the page-table entry for a page on disk is marked invalid and can hold the address of the page on disk.

q) what is a page-fault trap?
a) an access to a page that is currently not in memory is called a page fault. the mapping hardware then sends a trap to the o.s.

q) what is pure demand paging?
a) never bring a page into memory until it is required; a process may start with no pages in memory.

q) what is swap space?
a) the portion of the disk that holds the pages not in memory.

note:- if a page fault occurs while fetching an instruction, the instruction must be fetched again. if a page fault occurs while fetching an operand, the fetch and decode of the instruction must be redone.

note:- trying to increase the degree of multiprogramming can increase the page-fault rate.

q) what is the modify/dirty bit?
a) when a page fault occurs and no frame is free, we must swap out a victim page and bring in the desired page, which doubles the page-fault service time (swap out + swap in). to reduce this, a modify (dirty) bit is attached to each frame in physical memory; it is set when the frame is modified, and only then must the victim be written back to disk. if the page was not changed, we simply overwrite it with the desired page. clean victims therefore greatly reduce the page-fault service time.

q) what do frame-allocation algorithms do?
a) they decide how many frames are allocated to a process. this also has a large effect on the page-fault rate.

q) what is a reference string?
a) the string of memory references is called the reference string.

q) how to implement the FIFO page-replacement algorithm?
a) way 1) with each page in memory, associate the time when it was brought in, then remove the oldest page.
way 2) keep a FIFO queue of pages and remove the one at the head.

q) what is belady's anomaly?
a) FIFO (and some other algorithms) suffer from belady's anomaly: the number of page faults can increase even when more frames are allocated to the process.
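The anomaly can be reproduced with a small FIFO simulation. The reference string below is the classic demonstration case: 9 faults with 3 frames, but 10 faults with 4.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

// FIFO page replacement, used to reproduce Belady's anomaly.
public class FifoPaging {
    static int faults(int[] refs, int frames) {
        ArrayDeque<Integer> queue = new ArrayDeque<>(); // pages in arrival order
        Set<Integer> inMemory = new HashSet<>();
        int faults = 0;
        for (int page : refs) {
            if (inMemory.contains(page)) continue;      // hit: nothing to do
            faults++;
            if (queue.size() == frames) {
                inMemory.remove(queue.poll());          // evict the oldest page
            }
            queue.add(page);
            inMemory.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println(faults(refs, 3));  // 9
        System.out.println(faults(refs, 4));  // 10 -- more frames, more faults
    }
}
```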

note:- of all the page-replacement algorithms, OPT (optimal page replacement) has the fewest page faults, though it is not practically realizable.

q) what is the logic of OPT?
a) replace the page that will not be used for the longest period of time.

q) what is the logic of LRU?
a) here the page that has not been used for the longest period of time is replaced. it is an approximation of OPT: with each page, its time of last use is recorded.

q) implementation of LRU?
a) 1) counters:- each page-table entry gets a time-of-use field, and we add to the cpu a logical clock or counter that is incremented on every memory reference. on each reference the counter's contents are copied into the referenced page's field; to replace, we search for the page with the smallest value. this scheme requires a write to the page table on every reference, a search at replacement time, and care to handle counter overflow.

2) stack:- keep a stack of page numbers. when a page is referenced it is moved to the top of the stack; the tail (bottom) pointer then points to the LRU page. this is easy to implement with a doubly linked list.
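The stack implementation maps naturally onto Java's LinkedHashMap in access order (itself backed by a doubly linked list): the eldest entry is always the least recently used page, so evicting it implements LRU replacement.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU page replacement sketched with an access-ordered LinkedHashMap:
// the eldest entry is the least recently used page.
public class LruPaging {
    public static void main(String[] args) {
        final int frames = 3;
        Map<Integer, Boolean> memory =
            new LinkedHashMap<Integer, Boolean>(16, 0.75f, true) { // true = access order
                protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> eldest) {
                    return size() > frames;                        // evict the LRU page
                }
            };
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int faults = 0;
        for (int page : refs) {
            if (memory.get(page) == null) {  // miss: page fault
                faults++;
                memory.put(page, Boolean.TRUE);
            }                                // on a hit, get() already refreshed the order
        }
        System.out.println(faults);          // 10 faults for this string, 3 frames
    }
}
```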

note:- OPT and LRU do not suffer from belady's anomaly.

q) what are stack algorithms?
a) algorithms that never exhibit belady's anomaly: the set of pages in memory with n frames is always a subset of the set of pages that would be in memory with n+1 frames.

note:- these implementations slow down process execution, since every memory reference must update the counter or stack structures.

q) what are reference bits?
a) a technique used to approximate LRU. with each page we associate a reference bit; initially all bits are cleared to 0 by the o.s., and when a page is referenced the hardware sets its bit to 1. we can then tell which pages have been used and which have not.

q) what additional feature can be added to the reference bit?
a) attach a byte (8 bits) to each page to store history. at every timer interval (set by hardware), the reference bit is shifted into the high-order bit and all other bits move one position to the right. the page with the smallest value is the LRU page; ties are broken by FIFO.

q) what is the 2nd-chance algorithm?
a) here only one reference bit is used. when a page is selected for replacement, its reference bit is examined: if 0, the page is replaced; if 1, it gets a second chance, i.e. the bit is cleared, its arrival time is reset to the current time, and we move on to the next page. it can be implemented with a circular queue (the clock), sweeping until a page with reference bit 0 is found. if all bits are set, it degenerates to FIFO.
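A sketch of the circular-queue (clock) form of second chance, under the simplifying assumption that a newly loaded page has its reference bit set:

```java
// Second-chance (clock) replacement: pages sit in a circular buffer and a
// hand sweeps forward, clearing reference bits until it finds a 0.
public class SecondChance {
    int[] page;          // page number held by each frame (-1 = empty)
    boolean[] refBit;    // reference bit per frame
    int hand = 0;        // the clock hand

    SecondChance(int frames) {
        page = new int[frames];
        refBit = new boolean[frames];
        java.util.Arrays.fill(page, -1);
    }

    // returns true on a page fault
    boolean access(int p) {
        for (int i = 0; i < page.length; i++) {
            if (page[i] == p) { refBit[i] = true; return false; } // hit
        }
        while (refBit[hand]) {           // referenced pages get a 2nd chance
            refBit[hand] = false;
            hand = (hand + 1) % page.length;
        }
        page[hand] = p;                  // replace the victim
        refBit[hand] = true;             // assumption: loading counts as a reference
        hand = (hand + 1) % page.length;
        return true;
    }

    public static void main(String[] args) {
        SecondChance sc = new SecondChance(3);
        int faults = 0;
        for (int p : new int[]{1, 2, 3, 1, 4, 5}) if (sc.access(p)) faults++;
        System.out.println(faults);      // 5 faults for this short string
    }
}
```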

note:- we can also enhance the 2nd-chance algorithm by using the modify bit along with the reference bit. we then get 4 classes:-
case 1 : neither recently used nor modified - best page to replace.
case 2 : not recently used but modified - not quite as good; the page must be written out to disk before replacement.
case 3 : recently used but not modified - probably will be used again soon.
case 4 : recently used and modified - probably will be used again soon, and must be written out if replaced.

q) what are counting-based page-replacement algorithms?
a) here, for each page, we count the number of references made to it.
2 types exist:-
1) LFU:- the page with the lowest count is removed.
disadvantage:- a page that was used heavily in the beginning, but is no longer needed, remains in memory.

2) MFU:- it argues that the page with the smallest count has probably just been brought in and has yet to be used.

note:- the minimum number of frames to be allocated to a process is also determined by the instruction-set architecture: there must be enough frames to hold every page that a single instruction can reference.

note:- a limit is placed on the levels of indirection, to bound the minimum number of frames a process must be allocated to support virtual memory.

q) what is meant by equal allocation?
a) if we have m free frames and n processes, give each process m/n frames.

q) what is proportional allocation?
a) here we allocate to each process a number of frames equal to s(i)/S * m, where s(i) is the virtual-memory size of process i, S is the sum of the virtual-memory sizes of all processes, and m is the number of free frames.
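A small worked example of the s(i)/S * m formula; the process sizes and frame count below are illustrative values, with the fractional results truncated:

```java
// Proportional allocation: process i gets a_i = s_i / S * m frames, where
// s_i is its virtual-memory size, S the total size, m the free-frame count.
public class ProportionalAllocation {
    public static void main(String[] args) {
        int[] s = {10, 127};            // example virtual-memory sizes
        int m = 62;                     // total free frames
        int S = 0;
        for (int size : s) S += size;   // S = 137
        for (int i = 0; i < s.length; i++) {
            int frames = s[i] * m / S;  // integer truncation
            System.out.println("process " + i + ": " + frames + " frames");
        }
    }
}
```

With these numbers the small process gets 4 frames and the large one 57, rather than 31 each under equal allocation.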

note:- we may also want to consider the priority of a process when allocating frames, rather than only its size.

q) what are global replacement and local replacement?
a) global replacement allows a process to select a replacement frame from the set of all frames, even frames allocated to other processes. local replacement requires each process to select a replacement frame only from its own set of allocated frames.

note:- a higher-priority process can select a replacement frame from its own set or from lower-priority processes.

q) what is the disadvantage of global page replacement?
a) a process cannot control its own page-fault rate.

note:- if the number of frames allocated to a process falls below the minimum required by the computer architecture, the process must be swapped out completely.

q) what is thrashing?
a) it is high paging activity, in which a process spends more time paging than executing.

note:- thrashing occurs because of over-multiprogramming combined with global replacement: all the processes end up in the queue for the paging device rather than in the ready queue.

q) how to prevent thrashing?
a) by allocating to each process the number of frames it needs. for this we use the locality model of process execution.

q) what is locality?
a) it is a set of pages that are actively used together. program structure defines localities; e.g. a procedure, its instructions, its local variables and the global variables it uses form a locality.

note:- if a process is assigned fewer frames than its current locality, it will thrash.


q) what is a thread?
a) it is a lightweight process, a basic unit of cpu utilization. it comprises a thread id, program counter, register set and stack.

q) what does a thread share with the other threads of the same process?
a) the code section and data section.

note:- if a process has more than one thread, it can do more than one task at a time.

q) give examples of applications with multiple threads?
a) web browser:- one thread displays images or text while another retrieves data from the network.
word processor:- one thread displays graphics, another reads keystrokes, and a third performs grammar and spelling checks.

q) what are 2 kinds of threads?
a) 1) user threads and 2) kernel threads.

q) how are user threads implemented?
a) by a thread library. the library provides support for thread creation, scheduling and management with no help from the kernel.

q) what is the many-to-one model of threads?
a) many user-level threads are mapped onto a single kernel thread. green threads, a thread library for Solaris 2, uses this model.

q) what is the 1-1 model?
a) each user thread is mapped to a kernel thread, so creating a user thread requires creating a corresponding kernel thread too.
there is a restriction on the number of threads supported by the system. windows nt and 2000 use this model.

q) what is a many-many model?
a) many user-level threads mapped to smaller or equal number of kernel threads.

q) how many kernel threads we can have in many-many model?
a) depends either on the particular application or a particular machine.

q) in a multithreaded program, how can fork behave?
a) 1) it duplicates all threads of the parent.
2) the child is single-threaded, duplicating only the thread that called fork.

q) what is thread cancellation?
a) it is the task of terminating a thread before its completion.

q) any examples of thread cancellation?
a) 1) if multiple threads are searching through a database and one returns the result, the remaining threads are cancelled.
2) if you click the stop button in a web browser, it stops the web page from loading any further.
the thread that is to be cancelled is called the target thread.

q) what are the 2 different kinds of thread cancellation?
a) 1) asynchronous:- one thread immediately terminates the target thread.
2) deferred:- the target thread checks periodically whether it should terminate.
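Java's interruption mechanism is essentially deferred cancellation: the target thread polls its interrupt status and terminates at a safe point of its own choosing.

```java
// Deferred cancellation in Java: interrupt() only requests termination;
// the target thread notices the request and exits cleanly.
public class DeferredCancel {
    public static void main(String[] args) throws InterruptedException {
        Thread target = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // do a unit of work, then re-check the cancellation flag
            }
            System.out.println("target stopped cleanly");
        });
        target.start();
        Thread.sleep(50);    // let the target run a little
        target.interrupt();  // request cancellation
        target.join();       // wait for it to notice and exit
        System.out.println("done");
    }
}
```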

q) what is the difficulty in cancellation?
a) it arises in situations where resources have been allocated to the cancelled thread, or where the thread is cancelled in the middle of updating data it shares with other threads. with asynchronous cancellation, the o.s. may not be able to reclaim all resources from the thread.

q) what is a signal used for?
a) it is used in unix systems to notify a process that a particular event has occurred. it may be received synchronously or asynchronously, depending on the source of and the reason for the event being signalled.

q) tell about synchronous signals?
a) they occur on events such as an illegal memory access or division by zero. synchronous signals are delivered to the same process that performed the operation causing the signal.

note:- an asynchronous signal is sent to another process.

q) what are the various signal handlers?
a) 1) default signal handler.
2) user-defined signal handler.

note:- take the example of a web server. it creates a new thread whenever a new request comes in; if many requests arrive, the extra threads may exhaust system resources. to solve this we have thread pools.

q) what is the concept of a thread pool?
a) at process startup, create a number of threads and place them in a pool to wait for work. when a thread finishes its work it returns to the pool. if a request arrives and no thread is available, the request has to wait.

q) what is thread-specific data?
a) each thread shares the data of its process, but sometimes a thread needs its own copy of certain data. such data is called thread-specific data.

q) what are Pthreads?
a) Pthreads refers to the POSIX standard defining an API for thread creation and synchronization. it is a specification for thread behaviour, not an implementation; o.s. developers can implement the specification any way they like. libraries implementing Pthreads are generally confined to unix-based systems like Solaris 2.



q) what is a process?
a) a program in execution.

q) what is the difference b/w a program and a process?
a) a program is a passive entity, such as the contents of a file on disk, whereas a process is an active entity, with a program counter indicating the next instruction to be executed, and a set of associated resources.

q) how many states can a process be in?
a) 1)new 2)running 3)waiting 4)ready 5)terminated.

q) what is a process control block?
a) each process is represented in the o.s. by a process control block, which contains all information related to the process:-
1) process state
2) program counter
3) cpu registers:- where the state of the process is saved
4) cpu-scheduling information:- process priority, pointers to scheduling queues.
5) memory-management information:- limit and base registers, page tables.
6) accounting information:- amount of cpu and real time used.
7) i/o status information:- list of i/o devices allocated and list of open files.

q) what is ready queue?
a) processes that are in memory and are ready and are waiting to get executed are kept on list called ready queue, generally
implemented as linked list.

q) what is there in the ready queue?
a) process control blocks.

note:- there is device queue also for each device.

q) what is a queueing diagram?
a) it is used to represent process scheduling: rectangles for queues, circles for the resources serving them, etc.

q) how many types of schedulers are there?
a) in a batch system, more processes are submitted than can be executed; they are spooled to disk. the long-term scheduler selects processes from this pool and loads them into memory. the cpu scheduler, or short-term scheduler, selects a process from memory to be given the cpu; it must make its decisions fast.

q) when is long term scheduler invoked?
a) when a process leaves the system.

note:- the long-term scheduler must send a proper mix of i/o-bound and cpu-bound processes to memory.

q) which systems do not have a long-term scheduler?
a) time-sharing systems like unix: every new process is simply put in memory for the cpu scheduler.

q) what is the medium-term scheduler?
a) it introduces the concept of swapping.

q) what is a context switch?
a) switching the cpu to another process requires saving the state of the old process and loading the saved state of the new process.

q) what is system call to create a new process?
a) create-process.

note:- child processes may get different resources from the parent process, or they may share resources.

q) in unix, how do we create a new process?
a) each process has a pid, and new processes are created by the fork system call.

q) what are the types of concurrent processes in a system?
a) they may be independent (not sharing data) or cooperating (sharing data).

q) why is process cooperation needed?
a) 1) information sharing:- e.g. a shared file.
2) computation speedup:- a task is broken into subtasks; for this we need multiple processing elements.
3) modularity

q) give examples of the producer-consumer problem?
a) a print program produces characters to be consumed by the printer driver; a compiler produces assembly code for the assembler.

note:- synchronization is required between the producer and consumer, with a shared buffer between them.

q) where do we get the buffer from?
a) either the programmer codes it explicitly, or it is provided by the o.s. through an IPC facility.
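The shared buffer between producer and consumer can be sketched with Java's bounded BlockingQueue: put() blocks when the buffer is full and take() blocks when it is empty, giving the required synchronization for free.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer-consumer over a bounded shared buffer.
public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(2); // capacity 2
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) buffer.put(i);  // blocks if full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += buffer.take();    // blocks if empty
        producer.join();
        System.out.println(sum);                             // 15
    }
}
```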

q) what is IPC?
a) the inter-process communication facility. it allows processes to communicate without sharing the same address space, and is mostly used in distributed systems. it is provided by a message-passing system. ex: a chat program on the WWW.

q) where is message passing used within the computer itself?
a) between the components of a microkernel.

q) what operations does IPC provide?
a) at least 2, i.e. send(message) and receive(message).

q) what is direct communication?
a) here we have:-
1) send(P, message):- send a message to process P.
2) receive(Q, message):- receive a message from process Q.

q) what is indirect communication?
a) here messages are sent to and received from mailboxes, or ports. 2 processes need to share a mailbox. it works as:-
1) send(A, message):- send a message to mailbox A.
2) receive(A, message):- receive a message from mailbox A.

note:- a mailbox can be owned by a process or by the operating system. in the second case the o.s. lets processes create a mailbox, send to and receive from it, and then delete it.

q) how many ways of message passing?
a) it can be blocking or non-blocking, also called synchronous or asynchronous:-
1) blocking send
2) non-blocking send
3) blocking receive
4) non-blocking receive

q) what is a rendezvous?
a) when both send and receive are blocking, there is a rendezvous b/w sender and receiver.

q) what is buffering here?
a) messages exchanged b/w processes reside in a temporary queue, which can be implemented in 3 ways:-
1) zero capacity:- no message can wait in the queue, so the sender must block until the recipient receives the message.
2) bounded capacity:- the sender need only block when the queue is full.
3) unbounded capacity:- the sender never blocks.
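The three buffering schemes map neatly onto standard Java queue types, sketched here: zero capacity is a SynchronousQueue (a rendezvous: a handoff succeeds only when a receiver is waiting), bounded is an ArrayBlockingQueue, and effectively unbounded is a LinkedBlockingQueue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// The three IPC buffering schemes as standard queue types.
public class BufferingDemo {
    public static void main(String[] args) throws InterruptedException {
        // zero capacity: offer() fails immediately when no receiver is waiting
        SynchronousQueue<String> zero = new SynchronousQueue<>();
        System.out.println(zero.offer("msg"));          // false

        // bounded capacity: the sender must block (or fail) once the queue is full
        ArrayBlockingQueue<String> bounded = new ArrayBlockingQueue<>(1);
        System.out.println(bounded.offer("m1"));        // true
        System.out.println(bounded.offer("m2"));        // false, queue full

        // unbounded capacity: the sender never blocks
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
        for (int i = 0; i < 1000; i++) unbounded.put("m" + i);
        System.out.println(unbounded.size());           // 1000
    }
}
```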

q) what is a socket?
a) it is defined as an endpoint for communication. processes communicating over a network employ a pair of sockets. a socket is made up of an IP address concatenated with a port number.
in general, sockets use a client-server architecture.

q) what are the ports that server listens to provide certain services?
a) telnet server listens to port 23
ftp server listens to port 21
http server listens to port 80.

note:- all ports below 1024 are used for standard services.

q) when a client process requests a connection, what happens?
a) it is assigned a port number >1024 by its host computer. the connection then consists of a pair of sockets, and packets travel b/w client and server.

note:- when processes on the same host try to connect to the same server, each is given a unique port number: all connections must be unique.

q) how many types of sockets does java provide?
a) 3 types:-
1) connection oriented(TCP) :- implemented with Socket class.
2) connectionless(UDP) sockets use DatagramSocket class.
3) MulticastSocket class allows data to be sent to multiple recipients.
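A minimal sketch of the connection-oriented case using the Socket/ServerSocket classes: an echo over the loopback interface. Port 0 asks the OS for any free port, so no fixed port number is assumed.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A TCP echo over loopback: server thread accepts one connection and
// echoes a single line back to the client.
public class EchoDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);   // 0 = any free port
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println(in.readLine());          // echo one line back
            } catch (IOException e) { e.printStackTrace(); }
        });
        t.start();
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());       // prints hello
        }
        t.join();
        server.close();
    }
}
```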

q) what is a thread?
a) it is also called a lightweight process. it consists of a thread id, program counter and register set. it shares with the
other threads belonging to the same process the code section, data section and other o.s. resources. a traditional
heavyweight process has a single thread of control; if a process has multiple threads, it can do more than one task at a time.

q) give an example of threads?
a) a word processor may have one thread for displaying graphics, another for reading keystrokes, and a third for performing
spelling and grammar checks. threads also play an important role in remote procedure calls (RPC).

note:- instead of creating a new process on a web server to handle each client, a thread is created to service the request.
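The multiple-tasks-in-one-process idea can be sketched directly: two threads (named after the word-processor example above) run concurrently inside one process and share its data, here a common counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads within one process, sharing the process's data section
// (represented by a shared counter object).
public class ThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger sharedWork = new AtomicInteger(); // shared by both threads
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) sharedWork.incrementAndGet();
        };
        Thread t1 = new Thread(task, "spell-check");
        Thread t2 = new Thread(task, "keystrokes");
        t1.start(); t2.start();       // both tasks proceed concurrently
        t1.join();  t2.join();
        System.out.println(sharedWork.get());  // 2000
    }
}
```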



q) on what does the scheme for memory management depend?
a) on the hardware.

note:- memory can be viewed as a large array of bytes, each with its own address.

q) what is the input queue?
a) the collection of processes on disk, waiting to be brought into memory for execution, forms the input queue.

note:- the compiler maps the symbolic addresses of the program to relocatable addresses; the loader converts these into physical addresses.

q) what are the different types of binding of instructions and data to memory addresses?
a) 1) compile time:- used if we know at compile time where the program will reside in memory; if the location changes, we must recompile. eg:- MS-DOS .COM-format programs use this technique.
2) load time:- if the location is not known at compile time, the compiler generates relocatable code and the final binding is delayed until load time.
3) execution time:- if the process may be moved in memory during its execution, run-time binding is used.

q) what are the logical address space and the physical address space?
a) an address generated by the cpu is a logical address, and an address seen by the memory unit is a physical address. the set of all such addresses forms the corresponding address space.

note:- in case of compile time and load time binding , the logical address and physical address are the same, whereas in
run time they are not.

q) what is the memory-management unit?
a) the run-time mapping from logical to physical addresses is done by the MMU.

q) what is the concept of dynamic loading?
a) here a routine is not loaded into memory until it is called; only the main program is loaded initially. after a call, the routine is loaded and the program's address table is updated. to take advantage of this concept the programmer has to take proper care; it is not the responsibility of the O.S.

q) what happens in linking?
a) here the other modules and the system libraries are combined into the program image.

q) what is dynamic linking?
a) here the system libraries, or at least the routines referenced in them, are linked only when a call is made at run time.

q) what is a stub?
a) it is used to support dynamic linking. a stub is included in the image for each library-routine reference; it is a small piece of code
that tells how to locate the library routine in memory, or how to load it if the routine is not present.
** its advantage is that all processes share just one copy of the routine.

q) what is the concept of overlays?
a) it enables a process to be larger than the memory allocated to it. only the currently required instructions and data are kept in
memory; when other instructions are needed, they are brought in, in place of previously loaded data that is no longer needed.
eg:- a 2-pass assembler.

note:- an overlay driver is used to read the overlay code into memory. it is the responsibility of the programmer to take advantage of
this concept: the programmer designs the overlays, and the o.s. only does the I/O.

q) what is swapping?
a) it is the process of shifting a process to the backing store (swap out) and bringing a process into memory (swap in).

q) what is roll in and roll out?
a) it is a variant of swap in and swap out. it is done when a higher priority process wants to come into memory.

q) does a swapped-out process need to be swapped in to the same location?
a) in case of compile-time and load-time binding --- yes, but not in case of run-time address binding.

q) when a process can't be swapped out?
a) only if it is completely idle. if a process is waiting for I/O, and the I/O is asynchronously accessing the user memory
for I/O buffers, then it can't be swapped out.

**** in case of contiguous memory allocation:-

note:- memory divided into 2 parts:-
1) o.s 2) user programs.

q) how to provide memory protection?
a) using a relocation and limit register.

note:- logical address is first compared with the limit register.
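The relocation + limit scheme above can be sketched in Java (the class name and register values here are hypothetical; a real MMU does this comparison and addition in hardware):

```java
// sketch of MMU translation with a relocation (base) and limit register
public class RelocationMmu {
    private final int base;   // relocation register: start of the partition
    private final int limit;  // limit register: size of the partition

    public RelocationMmu(int base, int limit) {
        this.base = base;
        this.limit = limit;
    }

    // the logical address is first compared with the limit register;
    // an out-of-range address causes a trap (modeled here as an exception)
    public int translate(int logical) {
        if (logical < 0 || logical >= limit) {
            throw new IllegalArgumentException("trap: address out of range");
        }
        return base + logical;
    }
}
```

For example, with base 14000 and limit 500, logical address 100 maps to physical address 14100, while logical address 600 traps.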

q) what is transient operating -system code?
a) the o.s contains code and buffer space for device drivers. if a device driver is not commonly used, there is no need to
keep its code and data in memory. such code that comes and goes as needed is called transient o.s code.

q) what is MFT?
a) it is multiprogramming with a fixed number of tasks. here the memory is divided into fixed-sized partitions. in 1
partition, 1 process can reside at a time. it was used by IBM OS/360; it is not in use now.

q) what is MVT?
a) it is multiprogramming with a variable number of tasks. primarily used in a batch environment.

q) how MVT is implemented?
a) OS maintains a table indicating which parts of memory are free and which are allocated.

q) what is a hole?
a) free part of memory is called a hole.

q) how dynamic storage allocation problem solved?
a) it relates to how to satisfy a request of size n from the list of available holes. many strategies exist:- first fit,
best fit, worst fit.
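The three strategies can be sketched in Java, assuming each hole is represented only by its size (the class and method names are hypothetical):

```java
import java.util.List;

// choosing a hole for a request of size n; returns the index of the
// chosen hole, or -1 if no hole is large enough
public class HoleFit {
    // first fit: the first hole that is large enough
    public static int firstFit(List<Integer> holes, int n) {
        for (int i = 0; i < holes.size(); i++) {
            if (holes.get(i) >= n) return i;
        }
        return -1;
    }

    // best fit: the smallest hole that is large enough
    public static int bestFit(List<Integer> holes, int n) {
        int best = -1;
        for (int i = 0; i < holes.size(); i++) {
            if (holes.get(i) >= n && (best == -1 || holes.get(i) < holes.get(best))) best = i;
        }
        return best;
    }

    // worst fit: the largest hole that is large enough
    public static int worstFit(List<Integer> holes, int n) {
        int worst = -1;
        for (int i = 0; i < holes.size(); i++) {
            if (holes.get(i) >= n && (worst == -1 || holes.get(i) > holes.get(worst))) worst = i;
        }
        return worst;
    }
}
```

For example, with holes of sizes 100, 500, 200, 300, 600 and a request of 212, first fit picks the 500 hole, best fit the 300 hole, and worst fit the 600 hole.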

q) what is the disadvantage of these algorithms above?
a) they suffer from external fragmentation.

q) what is external fragmentation?
a) it exists when we have enough total free memory to accommodate a process, but it is not contiguous.

q) what is internal fragmentation?
a) when physical memory is broken into fixed-sized partitions, the process may be smaller than the size of the partition,
so the rest of the space in the partition is wasted. this is called internal fragmentation.

q) what is compaction?
a) it is one solution to external fragmentation. all the free memory is moved to one side and all programs and
data to the other side. this can be done only if address binding is done at run time. it also involves a lot of cost.

note:- the other solutions to external fragmentation are paging and segmentation (here the program need not be in contiguous memory)

q) what is paging?
a) it allows the physical address space of the process to be non-contiguous. earlier, when processes used to be swapped out,
we needed to search for free space on the disk (backing store) too, so external fragmentation occurred on disk as well.

q) concept of paging?
a) physical memory is broken into fixed-size blocks called frames. logical memory is broken into pages of the same size as
the frames. the backing store is also divided into blocks of the size of the frames.
when a process is to be loaded , its pages must be loaded into the frames.

q) now how cpu generates address?
a) now the address has 2 parts:- p and d. p is the index into the page table, from where we get the frame number, which
combines with the d part to form the physical address.

q) how to decide how many bits for p and d?
a) page size is defined by the hardware. this size is generally a power of 2. if the size of the logical address space is
pow(2,m) and the page size is pow(2,n), then the higher-order m-n bits give the page number and the n lower-order bits give
the page offset.
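The bit arithmetic above can be sketched in Java (a hypothetical helper class, assuming a page size of pow(2,n)):

```java
// splitting a logical address into page number p and offset d,
// and recombining a frame number with the offset
public class PagingAddress {
    // higher-order m-n bits: the page number
    public static int pageNumber(int logical, int n) {
        return logical >>> n;
    }

    // lower-order n bits: the page offset
    public static int pageOffset(int logical, int n) {
        return logical & ((1 << n) - 1);
    }

    // physical address = frame number (from the page table) combined with d
    public static int physical(int frame, int offset, int n) {
        return (frame << n) | offset;
    }
}
```

For example, with n = 10 (1 KB pages), logical address 4660 splits into page 4, offset 564; if page 4 maps to frame 7, the physical address is 7*1024 + 564 = 7732.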

note:- in paging we don't have external fragmentation, but we do have internal fragmentation.

q) so, can we have small page size to reduce the internal fragmentation?
a) but then the page table will grow in size, and the overhead to maintain it will increase. also, with a smaller page size,
disk I/O will not be efficient.

note:- when a process arrives in the system, its size, expressed in pages, is examined.

q) what is frame table?
a) it is a table maintained by the O.S which has 1 entry for each frame of physical memory. it tells whether a frame is free
or not, and if not, then to which process it is allocated.

note:- when a system call is made, suppose for I/O, the buffer address is passed as a parameter.

note:- for each process there is a page table.

q) does paging increase context switch time?
a) yes, because the dispatcher also has to set up the hardware page table.

note:- the page table pointer is also stored in the process control block.

q) how to implement page table?
a) way 1) use of a dedicated set of registers:- the dispatcher needs to load these registers on a context switch.
this method is not feasible if the page table has millions of entries.
way 2) here we store the page table in main memory and the PTBR (page-table base register) stores its address. context
switch time is less, but here, to make one access to memory, 2 memory accesses are required.

q) what is TLB(translation look-aside buffer)?
a) it is a high-speed associative memory with 2 fields, tag and value. it can have from 64 to 1024 entries. it will contain
only a few of the page table entries.

q) what if a search in TLB makes a miss?
a) then the reference to the in-memory page table is made. the new entry is also added to the TLB. if no entry is free, one
needs to be replaced.

q) what does it mean that some entries in the TLB are wired down?
a) it means that they cannot be removed from there. entries for kernel code are wired down.
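The TLB hit/miss behaviour above can be sketched in Java. This is a hypothetical software model: on a miss the entry is fetched from the in-memory page table and copied into the TLB, evicting the oldest entry when full (real TLBs use hardware replacement policies and honour wired-down entries):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// a tiny TLB in front of an in-memory page table (page -> frame)
public class Tlb {
    private final int capacity;
    private final Map<Integer, Integer> entries = new LinkedHashMap<>();
    public int misses = 0;

    public Tlb(int capacity) { this.capacity = capacity; }

    public int lookup(int page, int[] pageTable) {
        Integer frame = entries.get(page);
        if (frame != null) return frame;            // TLB hit: no memory access
        misses++;                                    // TLB miss: go to memory
        frame = pageTable[page];
        if (entries.size() == capacity) {            // no entry free: replace the oldest
            Integer oldest = entries.keySet().iterator().next();
            entries.remove(oldest);
        }
        entries.put(page, frame);                    // add the new entry to the TLB
        return frame;
    }
}
```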

q) what are protection bits?
a) they are stored in the page table and they indicate, during address mapping, whether the particular page is read-write
or read-only.

q) what is valid-invalid bit?
a) it is 1 more bit attached to the page table entry. the valid bit indicates that the page is in the process's logical
address space and thus legal; the invalid bit indicates that the page is not in the logical address space of the process.
illegal addresses cause a trap to the o.s.

note:- an advantage of paging is sharing common code. it can be done by setting entries in the page tables of different
processes to the same frame.

q) what is reentrant code?
a) code that never changes during execution. eg:- the code of a text editor can be executed by multiple processes at the
same time.

q) name some programs that can be shared?
a) compilers, database systems, run-time libraries, etc. code shared must be reentrant.

q) what is segmentation?
a) here the user's view of memory is as a collection of various segments. each segment has a name and a length. the address
here consists of the segment number and the offset.

note:- when a user program is compiled, the compiler itself constructs various segments.

q) who does mapping here?
a) the segmentation table; each entry here has the base and limit of a segment. here the logical address has 2 parts,
segment number and offset; the offset is compared against the segment limit.
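Segment-table translation can be sketched in Java (the class name and base/limit values are hypothetical; a real MMU performs the check in hardware):

```java
// translating a (segment, offset) pair using per-segment base and limit
public class SegmentationMmu {
    private final int[] base;   // base[s]: where segment s starts in physical memory
    private final int[] limit;  // limit[s]: length of segment s

    public SegmentationMmu(int[] base, int[] limit) {
        this.base = base;
        this.limit = limit;
    }

    // the offset is compared against the segment limit;
    // an out-of-range offset causes a trap (modeled here as an exception)
    public int translate(int segment, int offset) {
        if (offset < 0 || offset >= limit[segment]) {
            throw new IllegalArgumentException("trap: offset beyond segment limit");
        }
        return base[segment] + offset;
    }
}
```

For example, if segment 0 has base 1400 and limit 1000, then (0, 53) maps to physical address 1453.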

q) what are advantages of segmentation?
a) 1) main, functions, procedures, global variables, stacks, arrays, tables etc. are various segments. because a segment
contains entries of the same type, protection can be easily provided.
2) sharing of code and data can be done. we only need to make the processes' segment table entries point to the same segment.

note:- in case of segmentation, care must be taken while sharing a code segment, as its instructions refer to the segment
itself. all sharing processes need to have the same segment number for that code segment. this problem is not there in case
of read-only data segments that contain no such self references.