IGNOU Mcs-012 June-2021

Question 1:

a. Perform the following computations using signed 1’s complement notation of length 8 bits. Also indicate overflow, if any :

i. – 76 – 52
ii. + 79 + 49
iii. + 79 – 86

Answer:

i. +76 = 01001100, so -76 in signed 1's complement notation of length 8 bits is 10110011 (invert every bit). +52 = 00110100, so -52 is 11001011. Adding: 10110011 + 11001011 = 1 01111110; adding the end-around carry gives 01111110 + 1 = 01111111 = +127. Both operands are negative but the result is positive, so overflow has occurred: the true result, -128, lies outside the 8-bit 1's-complement range of -127 to +127.

ii. +79 in signed 1's complement notation of length 8 bits is 01001111, and +49 is 00110001. Adding: 01001111 + 00110001 = 10000000, which is negative in 1's complement. Both operands are positive but the result is negative, so overflow has occurred (the true result, +128, exceeds +127).

iii. +79 = 01001111. +86 = 01010110, so -86 is 10101001. Adding: 01001111 + 10101001 = 11111000, which is -7 in 1's complement (invert to get 00000111 = 7). The operands have opposite signs, so no overflow can occur.
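As a quick check, 8-bit 1's-complement addition with end-around carry and sign-based overflow detection can be sketched in Python (the helper names `encode` and `add_ones_complement` are illustrative, not a standard API):

```python
MASK = 0xFF  # 8 bits

def encode(value):
    """Encode a signed integer as an 8-bit 1's-complement bit pattern."""
    if value >= 0:
        return value & MASK
    return (~(-value)) & MASK  # invert the bits of the magnitude

def add_ones_complement(x, y):
    """Add two 8-bit 1's-complement patterns with end-around carry.
    Overflow: both operands share a sign but the result does not."""
    total = x + y
    result = ((total & MASK) + (total >> 8)) & MASK  # end-around carry
    same_sign = (x >> 7) == (y >> 7)
    overflow = same_sign and ((x >> 7) != (result >> 7))
    return result, overflow

# i.  -76 + (-52) = -128: out of range, overflow
print(format(add_ones_complement(encode(-76), encode(-52))[0], "08b"))  # 01111111
# ii. +79 + 49 = +128: out of range, overflow
print(format(add_ones_complement(encode(79), encode(49))[0], "08b"))    # 10000000
# iii. +79 + (-86) = -7: in range, no overflow
print(format(add_ones_complement(encode(79), encode(-86))[0], "08b"))   # 11111000
```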

b. Design a full-adder circuit using K-map.

Answer:

A full-adder circuit is a digital circuit that can perform the addition of three binary digits: A, B, and Cin (carry-in). The full-adder circuit outputs the sum (S) and carry-out (Cout) of the addition.

A K-map (Karnaugh map) is a graphical representation of a Boolean function that can be used to simplify Boolean expressions and design logic circuits.

Writing the truth table for the full adder and plotting S and Cout on K-maps:

A B Cin | S Cout
0 0 0   | 0  0
0 0 1   | 1  0
0 1 0   | 1  0
0 1 1   | 0  1
1 0 0   | 1  0
1 0 1   | 0  1
1 1 0   | 0  1
1 1 1   | 1  1

The K-map for the Cout function (rows in Gray-code order) is:

        Cin=0  Cin=1
A'B'      0      0
A'B       0      1
AB        1      1
AB'       0      1

Grouping the three pairs of adjacent 1s gives:

Cout = AB + ACin + BCin

The K-map for the S function is:

        Cin=0  Cin=1
A'B'      0      1
A'B       1      0
AB        0      1
AB'       1      0

The 1s form a checkerboard with no adjacent pairs, so S cannot be simplified by grouping:

S = A'B'Cin + A'BCin' + AB'Cin' + ABCin = A XOR B XOR Cin

These Boolean expressions can be implemented using AND, OR, and NOT gates (or, more compactly, two XOR gates for S and two AND gates plus an OR gate for Cout) to design the full-adder circuit.
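As a sketch, the minimised expressions Cout = AB + ACin + BCin and S = A XOR B XOR Cin can be verified against plain binary addition for all eight input combinations in Python:

```python
from itertools import product

def full_adder(a, b, cin):
    """Minimised expressions read off the K-maps:
    S = A xor B xor Cin, Cout = AB + ACin + BCin."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Verify against plain binary addition for every input row
for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("full adder verified")
```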

c. Explain the two-way set associative cache mapping scheme with the help of an example.

Answer: Two-way set associative cache mapping is a method of mapping memory addresses to cache locations in cache memory. It is an improvement over direct mapping, where each memory block can be mapped to only one specific cache location.

In a two-way set associative cache, each memory block can be mapped to one of two possible cache locations. This is achieved by partitioning the cache memory into “sets”, each of which contains two cache locations.

For example, consider a cache memory with a total of 8 cache locations, and a main memory with 32 memory blocks. In this case, we can partition the cache memory into 4 sets, each containing 2 cache locations. This means that each set can hold 2 different memory blocks.

When a memory block is accessed, the cache controller first computes the set index as (block number) mod (number of sets). Since this cache has 4 sets, the lower-order 2 bits of the block number are used as the set index, and the remaining bits of the block number are kept as the tag for comparison.

Once the set index is calculated, the cache controller checks the two cache locations in the corresponding set to see if the memory block is already present in one of them. If it is, the memory block is said to have a “cache hit”, and the data is retrieved from the cache. If the memory block is not found in the cache, it is said to have a “cache miss”, and the data is retrieved from the main memory and brought into the cache.

When a cache miss occurs, the cache controller chooses one of the two cache locations in the set to replace with the new memory block. This is typically done using a replacement algorithm such as the Least Recently Used (LRU) algorithm, which replaces the cache location that has not been accessed for the longest time.

In summary, two-way set associative cache mapping is a method of mapping memory blocks to cache locations where each memory block can be mapped to one of two possible cache locations. It improves cache performance by reducing the chances of conflicts between memory blocks.
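For the example above (32 blocks, 8 lines, 4 sets), the set index and tag of a block number can be computed in Python; `split_block` is an illustrative helper, not a standard API:

```python
def split_block(block_number, num_sets=4):
    """For the example: 8 cache lines, 2-way associative -> 4 sets.
    A block maps to set (block_number mod num_sets); the rest is the tag."""
    set_index = block_number % num_sets
    tag = block_number // num_sets
    return set_index, tag

# Blocks 5, 9 and 13 all contend for set 1 (each is 1 mod 4),
# but a 2-way set can hold two of them at the same time.
print(split_block(5))   # (1, 1)
print(split_block(9))   # (1, 2)
print(split_block(13))  # (1, 3)
```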

d. What is DMA? Why is it useful? Draw the block diagram of a DMA interface.

Answer: DMA stands for Direct Memory Access. It is a feature of computer systems that allows a device to access memory directly, bypassing the CPU. This is useful because it allows devices to transfer data to or from memory without the need for the CPU to be involved in every data transfer. This can significantly increase the data transfer rate and reduce the load on the CPU.

A DMA interface consists of several components:

  • DMA controller: The DMA controller is the main component that manages the DMA transfers. It controls the flow of data between memory and the device, and it also communicates with the CPU to request and release the bus.
  • Memory: The memory is where the data is stored and retrieved during DMA transfers.
  • Device: The device is the peripheral that initiates the DMA transfer. It can be a disk drive, a network card, or any other device that needs to transfer data to or from memory.
  • Bus: The bus connects the DMA controller, memory, and device. It allows the DMA controller to communicate with the device and to access memory.

Here is a block diagram of a DMA interface:

[CPU] <--> [DMA Controller] <--> [Bus] <--> [Memory]
                  ^
                  |
              [Device]

In the diagram, the DMA controller communicates with the device and memory over the bus. The device initiates the DMA transfer, and the DMA controller manages the transfer of data between memory and the device.

It is important to note that there are two types of DMA: Memory-to-memory DMA and memory-to-peripheral DMA, which are used depending on the system design.

e. Instructions of machine are such that they have two register operands. However, to load a register a special instruction has been designed which either contains the operand value or address of the operand. List and explain four addressing modes for this machine.

Answer:

  1. Immediate addressing mode: In this mode, the operand value is directly specified in the instruction. This allows the instruction to load a register with a constant value without the need to access memory.
  2. Direct addressing mode: In this mode, the instruction contains the memory address of the operand. The instruction loads the register with the value stored at the specified memory address.
  3. Register addressing mode: In this mode, the instruction contains the name of a register that holds the operand value. The instruction loads the register with the value stored in the specified register.
  4. Indirect addressing mode: In this mode, the instruction contains the memory address of another memory location that holds the memory address of the operand. The instruction loads the register with the value stored at the memory address specified in the second memory location.

It is important to note that the addressing modes available in a machine depend on the instruction set architecture (ISA) of the machine. Some architectures may have additional addressing modes or may use different names for the same addressing mode.
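As a sketch, the four modes can be modelled in Python; `memory`, `registers`, and `load` are toy structures for illustration, not part of any real ISA:

```python
# Hypothetical machine state: address -> value, and named registers.
memory = {100: 42, 200: 100}
registers = {"R1": 7}

def load(mode, operand):
    """Resolve an operand under each of the four addressing modes."""
    if mode == "immediate":
        return operand                  # the operand IS the value
    if mode == "direct":
        return memory[operand]          # the operand is an address
    if mode == "register":
        return registers[operand]       # the operand names a register
    if mode == "indirect":
        return memory[memory[operand]]  # operand -> address -> value
    raise ValueError(mode)

print(load("immediate", 5))    # 5
print(load("direct", 100))     # 42
print(load("register", "R1"))  # 7
print(load("indirect", 200))   # memory[200] = 100 -> memory[100] = 42
```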

f. What is the role of control memory in a micro-programmed control unit ? Explain the organisation of control memory with the help of a diagram. What is a horizontal micro-instruction ? Explain.

Answer: In a micro-programmed control unit, the control memory stores a sequence of micro-instructions that dictate the actions of the control unit. These micro-instructions are executed sequentially by the control unit and specify the operations to be performed on the data and the flow of control in the CPU.

The control memory is organized as a series of memory locations, each of which holds one micro-instruction. The control unit fetches the micro-instruction from the control memory and decodes it to determine the actions to be taken.

The organization of control memory can be represented as follows:

Control Memory: | Location 0 | Location 1 | Location 2 | … | Location n |

Each location in the control memory holds a micro-instruction, which is a binary word that contains the control signals for the CPU.

A horizontal micro-instruction is a micro-instruction where the control signals are encoded horizontally across the bits of the instruction. This is in contrast to a vertical micro-instruction where the control signals are encoded vertically in different fields or groups of bits.

A horizontal micro-instruction is organized as a wide bit-vector in which each bit directly drives one control signal in the CPU. This requires little or no decoding hardware and allows many micro-operations to be issued in parallel within a single micro-instruction, but the words are long, so the control memory is larger. Vertical micro-instructions make the opposite trade-off: the control signals are encoded into compact fields that must be decoded before use.

An example of a micro-instruction format with grouped control fields could be: [Opcode | ALU control signal | Register Select | Memory control signal | Next address]

Where each field of the instruction specifies a specific control signal in the CPU. Opcode field specifies the operation to be performed, ALU control signal field specifies the operation to be performed on ALU, Register Select field specifies the register to be selected, Memory control signal field specifies the memory operation and Next address field specifies the next micro-instruction address to be executed.

In summary, the control memory in a micro-programmed control unit stores the sequence of micro-instructions that dictate the actions of the control unit; they are executed in sequence and specify the operations to be performed on the data and the flow of control in the CPU. A horizontal micro-instruction encodes the control signals as individual bits across a wide instruction word: it requires almost no decoding and allows maximum parallelism among micro-operations, at the price of longer words and more control-memory space.

g. Write a program using 8086 assembly language that moves content of byte memory location X1 and X2 to AL and BL registers. The program then finds the
larger value of AL or BL register and stores it in DL register.

Answer: Here is an example program written in 8086 assembly language that performs the actions described:

MOV AL, [X1]        ; move content of memory location X1 to AL register
MOV BL, [X2]        ; move content of memory location X2 to BL register
CMP AL, BL          ; compare the values in AL and BL
JA GREATER          ; if AL is greater than BL, jump to GREATER
MOV DL, BL          ; move content of BL to DL
JMP DONE            ; jump to DONE
GREATER:
MOV DL, AL          ; move content of AL to DL
DONE:

This program uses the MOV instruction to move the contents of memory locations X1 and X2 into the AL and BL registers, respectively. It then uses the CMP instruction to compare AL with BL, and the JA instruction to jump to the label GREATER if AL is greater than BL. If AL is greater, the program moves AL into DL; otherwise it moves BL into DL. Note that JA performs an unsigned comparison; for signed byte values, JG would be used instead.

h. Assume the following values in the registers :
Instruction Pointer (IP) contains (A521)h
Stack Pointer (SP) contains (00FF)h
Code Segment (CS) contains (0FFF)h
Stack Segment (SS) contains (000F)h
Find the following using the above information :
i. Physical address of top of stack
ii. Physical address of instruction

Answer:

i. In the 8086, a 20-bit physical address is formed by multiplying the 16-bit segment register by (10)h (i.e., shifting it left by 4 bits) and adding the 16-bit offset. The physical address of the top of stack = SS × (10)h + SP = (000F0)h + (00FF)h = (001EF)h.

ii. Similarly, the physical address of the instruction = CS × (10)h + IP = (0FFF0)h + (A521)h = (1A511)h.
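The segment-times-16-plus-offset rule can be checked in Python (a sketch; `physical_address` is an illustrative helper):

```python
def physical_address(segment, offset):
    """8086 real mode: physical = segment * 16 + offset (20-bit result)."""
    return ((segment << 4) + offset) & 0xFFFFF

# i.  Top of stack: SS:SP = 000F:00FF
print(hex(physical_address(0x000F, 0x00FF)))  # 0x1ef
# ii. Next instruction: CS:IP = 0FFF:A521
print(hex(physical_address(0x0FFF, 0xA521)))  # 0x1a511
```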

Question 2:

a. Draw logic diagram to implement AND, OR and NOT operations using NAND gate(s).

Answer: NAND is a universal gate, so all three operations can be realised using NAND gates alone:

  • NOT: tie both inputs of a NAND gate together, giving NAND(A, A) = A'.
  • AND: follow a NAND gate with a NAND-based inverter, giving (A NAND B)' = A·B.
  • OR: invert each input with its own NAND gate and then NAND the results; by De Morgan's law, A' NAND B' = A + B.
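Since NAND is a universal gate, the NOT, AND, and OR constructions can be verified by truth table in Python (a minimal sketch with illustrative helper names):

```python
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    """NOT: tie both NAND inputs together."""
    return nand(a, a)

def and_(a, b):
    """AND: NAND followed by a NAND-based inverter."""
    return not_(nand(a, b))

def or_(a, b):
    """OR: invert each input, then NAND (De Morgan: A + B = (A'.B')')."""
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == 1 - a
print("all NAND constructions verified")
```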

b. Explain the following in the context of floating point number representation with the help of an example :
i. Normalised mantissa
ii. Biased exponent

Answer:

In the context of floating point number representation, a floating point number is represented using three fields: the sign bit, the exponent, and the mantissa. The mantissa represents the significant digits of the number, and the exponent represents the power of the base (usually 2) that the mantissa is multiplied by to obtain the final value of the number.

i. Normalized mantissa: A normalized mantissa has a single non-zero digit immediately to the left of the radix point. For example, the decimal number 0.0123 is written in normalized form as 1.23 × 10^-2, where 1.23 is the normalized mantissa and -2 is the exponent. In binary, the normalized mantissa always has the form 1.f, which is why the IEEE 754 formats do not store the leading 1 explicitly.

ii. Biased exponent: A biased exponent is the true exponent plus a fixed value (the bias), chosen so that both positive and negative exponents can be stored as unsigned values in the same field. For example, with a bias of 3, a true exponent of -2 would be stored as -2 + 3 = 1 in the exponent field. The IEEE 754 standard uses this scheme: the bias for a single-precision number is 127, so a true exponent of -2 is stored as 125.

For example, consider the single-precision number 0.25 = 1.0 × 2^-2. The mantissa 1.0 is already normalized, and the true exponent -2 is stored with the bias of 127 added, so the biased exponent field holds -2 + 127 = 125.

In summary, normalizing the mantissa ensures that the leading digit is non-zero and to the left of the decimal point, while a biased exponent allows for the representation of both positive and negative exponents in the same range of values by adjusting the exponent with a fixed bias value.
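As a sketch, the sign, biased exponent, and mantissa fields of an IEEE 754 single-precision value can be extracted with Python's standard `struct` module:

```python
import struct

def float_fields(x):
    """Unpack an IEEE 754 single-precision value into its three fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    biased_exp = (bits >> 23) & 0xFF  # stored exponent = true exponent + 127
    mantissa = bits & 0x7FFFFF        # fraction bits of the normalised 1.f
    return sign, biased_exp, mantissa

# 0.25 = 1.0 * 2^-2, so the biased exponent is -2 + 127 = 125
print(float_fields(0.25))  # (0, 125, 0)
# 1.0 = 1.0 * 2^0, so the biased exponent is 0 + 127 = 127
print(float_fields(1.0))   # (0, 127, 0)
```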

c. How many RAM chips of size 512 K X 1 bit are required to build 1 MB memory ?

Answer: To find out how many RAM chips of size 512 K X 1 bit are required to build 1 MB memory, you can use the following formula:

Number of RAM chips = (Total memory size in bytes) / (Size of each RAM chip in bytes)

In memory sizing, 1 MB = 2^20 bytes, and 1 byte = 8 bits. Since each chip is only 1 bit wide, it is simplest to work in bits:

Size of each chip = 512 K × 1 bit = 2^19 bits = 2^19 / 8 bytes = 65536 bytes

Total memory = 1 MB = 2^20 bytes = 2^23 bits

Number of RAM chips = 2^23 bits / 2^19 bits = 2^20 bytes / 65536 bytes = 16

So 16 RAM chips of size 512 K × 1 bit are required to build 1 MB of memory. They would typically be arranged as 2 banks of 8 chips each: 8 chips in parallel supply one byte, and each bank covers 512 K addresses.
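The arithmetic can be confirmed in a couple of lines of Python:

```python
# 1 MB = 2^20 bytes = 2^23 bits; each chip holds 512 K x 1 = 2^19 bits.
total_bits = 2**20 * 8       # 1 MB expressed in bits
chip_bits = 512 * 1024 * 1   # one 512 K x 1-bit chip
chips_needed = total_bits // chip_bits
print(chips_needed)  # 16
```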

d. What is Programmed Input/Output ? Explain with the help of a diagram. Explain the difference between Programmed I/O and Interrupt driven I/O.

Answer: Programmed Input/Output (PIO) is a method of performing input and output in which the CPU itself controls every data transfer by executing I/O instructions. The CPU repeatedly reads the device's status register (polling) until the device is ready, then moves the data word between a CPU register and the device's data register, and from there to or from memory.

The following diagram illustrates the flow of data in PIO:

[Memory] <---> [CPU] <---> [I/O Device]

In this diagram, every word transferred between memory and the I/O device passes through the CPU, which polls the device and executes an instruction for each word moved.

| Programmed I/O | Interrupt-Initiated I/O |
| --- | --- |
| Data transfer is initiated by instructions stored in the program; whenever an I/O transfer is requested, these instructions are executed. | The I/O transfer is initiated by an interrupt signalled to the CPU. |
| The CPU stays in a loop, continuously monitoring the peripheral to see whether it is ready for transfer. | There is no need for the CPU to poll; the interrupt informs the CPU when the device is ready for data transfer. |
| CPU cycles are wasted because the CPU remains needlessly busy, reducing system efficiency. | CPU cycles are not wasted; the CPU continues with other work in the meantime, making this method more efficient. |
| The CPU cannot do any other work until the transfer is complete. | The CPU can do other work until it is interrupted by the signal indicating device readiness. |
| Treated as a slow I/O module. | Faster than the programmed I/O module. |
| Quite easy to program and understand. | Can be tricky and complicated to follow when written in a low-level language. |
| System performance is severely degraded. | System performance is enhanced to some extent. |

e. What is Latency time in the hard disk ?

Answer: Latency time (rotational latency) in a hard disk is the time that elapses, after the read/write head has reached the correct track, before the desired sector rotates under the head. On average it equals the time for half a revolution of the platter.

Latency time is not the same as access time: access time is the total time to reach the data, i.e. seek time (moving the head to the correct track) plus rotational latency plus data transfer time.

Latency time is measured in milliseconds (ms) and is a key factor in hard-disk performance: a disk that spins faster has a lower average latency and therefore faster access times.

Two delay components are often discussed together:

  • Rotational latency: the time for the platter to rotate until the requested sector is under the read/write head. It is determined by the rotation speed (RPM) of the disk and the angular position of the data.
  • Seek time: the time for the read/write head to move from its current track to the desired track. It is determined by the distance the head must travel. (Strictly, "latency" usually refers only to the rotational component; seek time is counted separately.)

Latency time can be improved by using faster disk rotation speeds, or by using disk technologies such as solid-state drives (SSDs) that have no moving parts and thus have much lower latency times compared to traditional hard disk drives (HDDs).
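As a sketch, the average rotational latency implied by a disk's rotation speed can be computed in Python (`avg_rotational_latency_ms` is an illustrative helper):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a platter revolution."""
    ms_per_revolution = 60_000 / rpm  # 60 s/min * 1000 ms/s
    return ms_per_revolution / 2

print(avg_rotational_latency_ms(7200))   # ~4.17 ms for a 7200 RPM disk
print(avg_rotational_latency_ms(15000))  # 2.0 ms for a 15000 RPM disk
```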

Question 3:

a. Explain the steps required to fetch an instruction from a memory location to instruction register with the help of micro-operations.

Answer:

The steps required to fetch an instruction from a memory location to the instruction register can be broken down into the following micro-operations:

  1. MAR (Memory Address Register) <- PC (Program Counter): The value of the program counter (PC) is loaded into the memory address register (MAR). The MAR holds the memory address of the instruction that needs to be fetched.
  2. Memory Read: The instruction is read from the memory location specified by the MAR and stored in the memory buffer register (MBR).
  3. IR (Instruction Register) <- MBR (Memory Buffer Register): The instruction that was read from the memory location is loaded into the instruction register (IR). The IR holds the instruction that is currently being executed by the CPU.
  4. PC <- PC + 1: The program counter is incremented by 1 to point to the next instruction in the memory.
  5. Decode and Execute: The instruction in the instruction register is decoded and executed by the CPU.

The above micro-operations are executed in the fetch-decode-execute cycle of the CPU, which is the basic operation of the CPU. The cycle begins with the fetching of an instruction from memory, followed by decoding the instruction and executing it.

In summary, the steps of fetching an instruction from a memory location to instruction register, are:

  1. Loading the address of the instruction from the program counter into the memory address register
  2. Reading the instruction from the memory location specified by the memory address register into the memory buffer register
  3. Loading the instruction from the memory buffer register into the instruction register
  4. Incrementing the program counter by 1
  5. Decoding and Executing the instruction.
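The fetch micro-operations can be sketched as a toy register-transfer simulation in Python (all names and the instruction strings are illustrative):

```python
# Toy register-transfer model of the fetch steps; address -> instruction.
memory = {0: "LOAD R1, X", 1: "ADD R1, R2"}
cpu = {"PC": 0, "MAR": None, "MBR": None, "IR": None}

def fetch(cpu, memory):
    cpu["MAR"] = cpu["PC"]            # 1. MAR <- PC
    cpu["MBR"] = memory[cpu["MAR"]]   # 2. read memory into MBR
    cpu["IR"] = cpu["MBR"]            # 3. IR <- MBR
    cpu["PC"] += 1                    # 4. PC <- PC + 1

fetch(cpu, memory)
print(cpu["IR"])  # LOAD R1, X
print(cpu["PC"])  # 1
```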

b. What will be the length of various fields of an instruction considering the following ?
i. 64 possible operations
ii. 8 addressing modes
iii. Memory size of 4 KB (byte addressing is used)
iv. It has 32 registers
v. Each instruction has one register and one memory operand
Make suitable assumptions.

Answer:

Assuming each instruction is represented as a single word in memory, we can calculate the length of the various fields in the instruction as follows:

  1. Operation field: With 64 possible operations, we would need 6 bits to represent all possible operations.
  2. Addressing mode field: With 8 possible addressing modes, we would need 3 bits to represent all possible modes.
  3. Register field: With 32 registers, we would need 5 bits to represent all possible registers.
  4. Memory operand field: With a memory size of 4 KB and byte addressing, we would need 12 bits to represent all possible memory addresses.

So, the total length of the instruction would be 6 bits (operation) + 3 bits (addressing mode) + 5 bits (register) + 12 bits (memory operand) = 26 bits.

Note: This is a theoretical answer and the instruction format may vary depending on the specific architecture.
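Each field needs ceil(log2(n)) bits to distinguish n values, which can be checked in Python:

```python
from math import ceil, log2

# Number of distinct values each field must distinguish
fields = {
    "opcode": 64,            # 64 possible operations
    "addressing mode": 8,    # 8 addressing modes
    "register": 32,          # 32 registers
    "memory address": 4096,  # 4 KB memory, byte-addressed
}
widths = {name: ceil(log2(n)) for name, n in fields.items()}
print(widths)
print(sum(widths.values()))  # 26
```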

c. Explain the concept of NEAR and FAR procedural calls in 8086 microprocessor with the help of one example each.

Answer: In the 8086 microprocessor, there are two types of procedure calls: NEAR and FAR. A NEAR call targets a procedure located in the same segment as the calling code. Only the 16-bit offset of the next instruction (the return IP) is pushed onto the stack, so both the call and the return must stay within the current 64 KB code segment.

Example:

mov ax, 3
call NEAR_PROC ; Call a NEAR procedure
; Code continues here after the procedure returns

A FAR procedural call is a call to a procedure located in a different segment from the calling code. The return address for a FAR call is stored on the stack as a 32-bit pointer containing both the offset (IP) and the segment (CS) of the return point, and the CALL loads both CS and IP with the target. This allows the called procedure to be anywhere in the 8086's 1 MB address space.

Example:

mov ax, 3
call FAR_PROC ; Call a FAR procedure
; Code continues here after the procedure returns

In general, NEAR calls are faster and more efficient than FAR calls because they require less memory and processing power. However, if the procedure is located in a different segment than the calling code, a FAR call is required.

d. Explain the use of INT 21h in 8086 microprocessor for reading a single character from the keyboard with the help of an example.

Answer: In 8086 programs running under DOS, the INT 21h software interrupt is used to request operating-system services, including reading a single character from the keyboard.

When INT 21h is executed, the microprocessor uses the Interrupt Vector Table (IVT) to transfer control to the DOS service routine. The function code placed in the AH register selects the operation to be performed; function 01h reads a single character from the keyboard (with echo to the screen).

Here is an example of how to use the INT 21h instruction to read a single character from the keyboard:

mov ah, 01h ; function 01h: read a character from the keyboard (with echo)
int 21h     ; call DOS interrupt 21h; the character is returned in AL
mov dl, al  ; copy the character from AL into DL

In the above example, the first instruction sets AH to 01h, selecting DOS function 01h (read a character from the keyboard with echo) of INT 21h. The second instruction issues the INT 21h software interrupt, which transfers control to the DOS service routine; the routine waits for a key, echoes it to the screen, and returns the character in the AL register. The third instruction copies the character from AL to DL, where it can be held for further processing.

It’s worth noting that the above example is reading a character from the keyboard in blocking mode, which means that the program execution will be halted until a character is entered by the user. There are other function codes available in INT 21h to read input in non-blocking mode, or to check the status of the keyboard buffer before reading a key.

Question 4:

a. Draw and explain the truth table and logic diagram of a 3-bit synchronous counter.

Answer: A 3-bit synchronous counter is a digital circuit that counts through a sequence of binary numbers in a cyclic manner, with all flip-flops driven by a common clock. With 3 bits the counter cycles through 2^3 = 8 states, from 0 (000) up to the maximum count 2^3 - 1 = 7 (111), and then wraps around.

The truth table lists, for each clock pulse, the present state (Q2 Q1 Q0) and the next state:

| Present Q2 Q1 Q0 | Next Q2 Q1 Q0 | Count |
| 0 0 0 | 0 0 1 | 0 -> 1 |
| 0 0 1 | 0 1 0 | 1 -> 2 |
| 0 1 0 | 0 1 1 | 2 -> 3 |
| 0 1 1 | 1 0 0 | 3 -> 4 |
| 1 0 0 | 1 0 1 | 4 -> 5 |
| 1 0 1 | 1 1 0 | 5 -> 6 |
| 1 1 0 | 1 1 1 | 6 -> 7 |
| 1 1 1 | 0 0 0 | 7 -> 0 |

The logic diagram of a 3-bit synchronous counter can be constructed from three J-K flip-flops (with J tied to K, so each behaves as a T flip-flop) sharing a common clock. The toggle conditions are: the first flip-flop toggles on every clock pulse (J0 = K0 = 1); the second toggles when Q0 = 1 (J1 = K1 = Q0); and the third toggles when both lower bits are 1 (J2 = K2 = Q0·Q1, produced by an AND gate). Because all flip-flops are clocked simultaneously, the counter is synchronous.

In summary, a 3-bit synchronous counter counts cyclically from 0 to 7 under a common clock. It is built from three J-K (or T) flip-flops with an AND gate generating the toggle condition for the most significant stage, so that each stage toggles exactly when all lower-order stages are 1.
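As a sketch, a 3-bit synchronous counter built from T flip-flops (toggle inputs T0 = 1, T1 = Q0, T2 = Q0 AND Q1) can be simulated in Python:

```python
def next_state(q2, q1, q0):
    """One clock tick: each T input toggles (XORs) its flip-flop."""
    t0, t1, t2 = 1, q0, q0 & q1
    return q2 ^ t2, q1 ^ t1, q0 ^ t0

state = (0, 0, 0)
seen = []
for _ in range(8):
    seen.append(state)
    q2, q1, q0 = state
    state = next_state(q2, q1, q0)
print(seen)  # counts 000, 001, ..., 111, then wraps to 000
```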

b. Explain the von Neumann architecture with the help of a diagram.

Answer: The von Neumann architecture is a computer architecture design principle proposed by mathematician and computer scientist John von Neumann in 1945. The architecture is based on the idea of a single memory space that can store both instructions and data, and a single CPU that can execute instructions and access data.

The diagram below illustrates the basic components of the von Neumann architecture:

[Input Device] --> [CPU: Control Unit + Arithmetic/Logic Unit + Registers] --> [Output Device]
                                        ^
                                        |
                                        v
                                 [Memory Unit]
  1. Memory: A single memory space that can store both instructions and data.
  2. CPU (Central Processing Unit): The heart of the computer, it can execute instructions and access data in memory. It typically has several registers for holding data and instructions, and an arithmetic logic unit (ALU) for performing calculations.
  3. Input/Output (I/O) devices: These devices allow the computer to communicate with the external world, such as a keyboard, mouse, display, and storage devices.
  4. Bus: A set of connections that allow the different components of the computer to communicate with each other.

In the von Neumann architecture, the CPU fetches instructions from memory, one at a time, and executes them. When an instruction requires data, the CPU retrieves it from memory and stores it in a register. After the instruction is executed, the results are also stored in memory or registers. This process continues until the program terminates.

This architecture is widely used in most of the computers today. Its main advantage is the flexibility it provides. Programs and data can be stored in the same memory space and accessed by the CPU as needed. However, this architecture also has a drawback, which is the Von Neumann bottleneck, which refers to the limited bandwidth between the CPU and memory, which can result in slower performance.

c. What is an Input/Output processor ? How is it different from DMA ?

Answer: An Input/Output (I/O) Processor, also known as an I/O controller or I/O coprocessor, is a specialized processor or chip that manages the flow of data between the computer’s main CPU and its input/output devices. The main function of an I/O processor is to handle the low-level details of communicating with I/O devices, such as reading and writing data, managing device status, and handling errors.

The main difference between an I/O processor and Direct Memory Access (DMA) lies in how much work is offloaded from the CPU. A DMA controller can only transfer blocks of data between a device and memory; the CPU must still set up each transfer (addresses, word count, direction) itself. An I/O processor goes further: it is itself a small processor that fetches and executes its own I/O programs (channel programs), so the CPU merely points it at a program and is then free until the entire I/O operation completes.

In summary, a DMA controller moves data without involving the CPU in each word transferred, but it cannot execute instructions; an I/O processor both transfers data and executes its own I/O instruction sequences, giving the CPU an even greater degree of independence from I/O handling.

d. Differentiate between the following :
i. SRAM and DRAM

Answer:

| Parameter | SRAM | DRAM |
| --- | --- | --- |
| Full form | SRAM stands for Static Random Access Memory. | DRAM stands for Dynamic Random Access Memory. |
| Component | SRAM stores each bit in a latch built from transistors. | DRAM stores each bit as charge on a capacitor. |
| Need to refresh | No capacitors are used, so no refresh is needed. | The contents of the capacitors must be refreshed periodically. |
| Speed | Faster data read/write. | Slower data read/write. |
| Power consumption | SRAM consumes more power. | DRAM consumes less power. |
| Data life | Long data life (as long as power is supplied). | Short data life between refreshes. |
| Cost | Expensive. | Less expensive. |
| Density | Low-density device. | High-density device. |
| Usage | Used as cache memory in computers and other computing devices. | Used as main memory in computer systems. |


ii. ROM and Flash Memory

Answer: ROM (Read-Only Memory) and flash memory are types of non-volatile memory, which means they retain their stored data even when the power is turned off. However, they have some key differences:

  • ROM is a type of memory that can only be read, not written, during normal operation. It is typically used to hold firmware that must survive power-off, such as a computer’s BIOS or a router’s boot code. Once data is written into a mask-programmed ROM at manufacture, it cannot be modified or deleted, making it a permanent storage solution.
  • Flash memory is a type of memory that can be both read and written to. It is used in a wide range of electronic devices, such as USB drives, digital cameras, and smartphones. Unlike ROM, flash memory can be reprogrammed and updated. Data is stored in flash memory cells, which can be erased and reprogrammed in blocks, allowing for more efficient and flexible storage.
  • ROM is generally more expensive and slower than flash memory. Flash memory is less expensive and faster than ROM.

In summary, ROM is a type of memory that can only be read and not written to, it is typically used to store the BIOS and firmware of a device and it is permanent storage. Flash memory, on the other hand, can be both read and written to and is used in a wide range of electronic devices, it is less expensive and faster than ROM, and it is a flexible storage solution.

Question 5:
a. What is the use of stack in subroutine CALL instruction ? Explain using an example.

Answer: In a computer program, a stack is a data structure used to store information about subroutines, including the memory addresses where the program should return after the subroutine has completed. The CALL instruction is used to initiate a subroutine, and it pushes the current address (i.e., the address of the instruction immediately following the CALL instruction) onto the stack. This allows the program to later return to that address using the RET (return) instruction, which pops the top address off of the stack and jumps to it.

For example, consider the following simple program:

main:
    call subroutine
    ; program continues execution here after subroutine completes

subroutine:
    ; code for subroutine
    ret

When the program reaches the CALL instruction in the “main” routine, the current address (the address of the instruction immediately following the CALL instruction) is pushed onto the stack, and the program jumps to the “subroutine” routine. When the subroutine completes and reaches the RET instruction, the top address is popped off the stack and the program jumps back to it, continuing execution at the instruction immediately following the CALL instruction in the “main” routine.
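The push/pop mechanism above can be modeled in a few lines. This is a minimal sketch in plain Python (not real CPU semantics), where a list stands in for the hardware stack that CALL and RET manipulate; the address 0x0103 is a hypothetical return address chosen for illustration.

```python
# A list models the hardware stack used by CALL and RET.
stack = []

def call(return_address):
    # CALL pushes the address of the instruction after itself...
    stack.append(return_address)
    # ...and then jumps to the subroutine's first instruction.

def ret():
    # RET pops the saved address and jumps back to it.
    return stack.pop()

call(0x0103)        # hypothetical address of the instruction after CALL
resume_at = ret()   # the subroutine finishes; execution resumes here
```

Because the stack is last-in, first-out, nested subroutine calls unwind in the correct reverse order: each RET pops the return address pushed by the most recent CALL.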

b. Why is RAID used in computers ? What is RAID Level 0 ?

Answer: RAID (Redundant Array of Independent Disks) is used in computers to provide increased data reliability and performance. RAID can be implemented in a number of different ways, called “RAID levels,” each of which provides a different balance of data reliability and performance.

RAID Level 0, also known as “striping,” provides increased performance by splitting data across multiple disks, but does not provide any data redundancy. This means that if one disk fails, all data on the array is lost. Therefore, RAID 0 is generally not recommended for use in critical systems, as it provides no protection against data loss.
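The striping idea can be sketched briefly. The following is an illustrative Python model (not a real RAID implementation): logical blocks are spread round-robin across the disks, with no parity or mirroring, so losing any one disk loses part of every striped file.

```python
# Minimal RAID 0 striping sketch: block i goes to disk i mod N.
def stripe(blocks, num_disks):
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)  # round-robin placement
    return disks

layout = stripe(["B0", "B1", "B2", "B3", "B4", "B5"], 2)
# Disk 0 holds B0, B2, B4 and disk 1 holds B1, B3, B5, so reads and
# writes can proceed on both disks in parallel -- but there is no
# redundant copy from which to rebuild a failed disk.
```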


c. Explain the following assembly language instructions with the help of an example
each :
i. MUL :-> The MUL (unsigned multiply) instruction multiplies the accumulator by a single operand. For example, the instruction “MUL EBX” multiplies the value in EAX by the value in EBX and stores the 64-bit result in the EDX:EAX register pair (high half in EDX, low half in EAX).


ii. ADD :-> ADD (Add) instruction is used to add two values and store the result in a register or memory location. For example, the instruction “ADD EAX, EBX” would add the values in the EAX and EBX registers, and store the result in the EAX register.


iii. TEST :-> The TEST instruction performs a bit-wise AND operation between two values and sets the CPU flags (such as the zero and sign flags) based on the result, without storing the result itself. For example, the instruction “TEST EAX, EBX” would AND the values in the EAX and EBX registers and set the flags accordingly; it is typically followed by a conditional jump such as JZ.


iv. SHR :-> SHR (Shift Right) instruction is used to shift the bits of a value to the right by a specified number of positions. For example, the instruction “SHR EAX, 2” would shift the bits in the EAX register to the right by 2 positions, effectively dividing the value in EAX by 4.
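The arithmetic these four instructions perform can be restated in plain Python. This is an illustrative sketch, not an emulator: it models 32-bit unsigned registers with a mask and reduces the flags to just the zero flag (ZF) for brevity.

```python
MASK = 0xFFFFFFFF              # registers are 32 bits wide

eax, ebx = 12, 6               # example register contents

mul_low = (eax * ebx) & MASK   # MUL EBX: low 32 bits of EAX * EBX
add_res = (eax + ebx) & MASK   # ADD EAX, EBX: sum stored back in EAX
zf      = (eax & ebx) == 0     # TEST EAX, EBX: ZF set if the AND is zero
shr_res = eax >> 2             # SHR EAX, 2: unsigned divide by 4
```

With EAX = 12 and EBX = 6, the product is 72, the sum is 18, TEST leaves ZF clear (12 AND 6 = 4, which is nonzero), and the shift yields 3.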

d. Explain the use of CX register in implementing looping in 8086 assembly language.

Answer: In the 8086 assembly language, the CX register is commonly used as a counter in looping constructs, such as the “LOOP” instruction.

The CX register is typically initialized with the number of iterations that the loop should perform, and then decremented by one on each iteration of the loop. When the CX register reaches zero, the loop will exit.

For example, consider the following code snippet that uses the CX register to perform a loop that runs 10 times:

MOV CX, 10    ; Initialize CX with the number of iterations

Label:
; Code to be executed in the loop

LOOP Label    ; Decrement CX by 1, and jump to Label if CX is not 0

In this example, the instruction “MOV CX, 10” initializes the CX register with the value 10, which represents the number of iterations the loop should perform. The instruction “LOOP Label” decrements the CX register by 1, and then checks if the value in CX is 0. If it is not 0, the instruction jumps to the label “Label” and the loop continues. If the value in CX is 0, the instruction falls through and the loop exits.
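The decrement-and-branch behavior of LOOP can be restated in plain Python. This is an illustrative equivalent of the snippet above, not 8086 code: load the counter, run the body, decrement, and branch back until the counter reaches zero.

```python
cx = 10               # MOV CX, 10
iterations = 0
while True:
    iterations += 1   # code to be executed in the loop
    cx -= 1           # LOOP decrements CX by 1...
    if cx == 0:       # ...and falls through once CX reaches zero
        break
```

The body runs exactly ten times, and CX holds zero when the loop exits, matching the behavior described for the LOOP instruction.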
