
COMPUTER


A computer is a digital electronic machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern computers can perform generic sets of operations known as programs, which enable them to carry out a wide range of tasks. A computer system is a "complete" computer that includes the hardware, operating system (main software), and peripheral equipment needed for "full" operation. The term may also refer to a group of computers that are linked and work together, such as a computer network or a computer cluster.


Computers are used as control systems for a wide range of industrial and consumer products. These range from simple special-purpose devices such as microwave ovens and remote controls, through factory devices such as industrial robots and computer-aided design systems, to general-purpose devices such as personal computers and mobile devices such as smartphones. Computers also power the Internet, which links billions of computers and their users.


Early computers were designed to be used only for calculations. Simple manual instruments such as the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early twentieth century.


During World War II, the first digital electronic calculating machines were built. The first semiconductor transistors were developed in the late 1940s, followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, paving the way for the microprocessor and microcomputer revolutions of the 1970s. Since then, computer speed, power, and versatility have increased tremendously, with transistor counts rising at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution of the late twentieth and early twenty-first centuries.


A modern computer typically has at least one processing element, usually a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, such as semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, and so on), output devices (monitors, printers, and so on), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.



Etymology of Computers

"I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number," writes English writer Richard Brathwait in his 1613 work The Yong Mans Gleanings, according to the Oxford English Dictionary. A human computer, or someone who performs calculations or computations, was referred to in this meaning of the phrase. Until the middle of the twentieth century, the word had the same meaning. Women were frequently hired as computers in the latter half of this time since they could be paid less than their male counterparts. Women made up the majority of human computers by 1943.


According to the Online Etymology Dictionary, the first attested use of computer was in the 1640s, meaning "one who calculates"; it is an "agent noun from compute (v.)". The Online Etymology Dictionary dates the use of the term to mean "'calculating machine' (of any type)" from 1897. The "modern use" of the term, meaning "programmable digital electronic computer", dates from "1945 under this name; theoretical from 1937, as Turing machine".


History of Computers

Before the twentieth century


Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.), which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is another example.


The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.


According to Derek J. de Solla Price, the Antikythera mechanism is thought to be the earliest known mechanical analog computer. [6] It was created with the purpose of calculating astronomical positions. It was discovered in 1901 in the Antikythera wreck off the coast of Antikythera, a Greek island between Kythera and Crete, and is thought to date from around 100 BC. It would take until the thirteenth century for devices comparable to the Antikythera mechanism to resurface.


Many mechanical aids to calculation and measurement were constructed for astronomical and navigational use. The planisphere, a star chart, was invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the first or second centuries BC and is often attributed to Hipparchus. A combination of the planisphere and the dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. Abi Bakr of Isfahan, Persia, built an astrolabe incorporating a mechanical calendar computer and gear wheels in 1235. Around 1000 AD, Abū Rayhān al-Bīrūnī had constructed the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear wheels.


The sector, a mathematical tool used for proportion, trigonometry, multiplication and division, as well as numerous functions such as squares and cube roots, was invented in the late 16th century and employed in gunnery, surveying, and navigation.


The planimeter was a hand-held device that used a mechanical linkage to trace across a closed figure to compute its area.


In 1831–1835, the mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The Scottish scientist Sir William Thomson invented the tide-predicting machine in 1872, which was of great utility for navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.


The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disc integrators. In a differential analyser, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Beginning in the 1920s, Vannevar Bush and others developed mechanical differential analysers.


The very first computer

The concept of a programmable computer originated with Charles Babbage, an English mechanical engineer and polymath. Often called the "father of the computer", he conceived the first mechanical computer in the early 19th century. In 1833, after working on his revolutionary difference engine, designed to aid in navigational calculations, he realized that a much more general design, an Analytical Engine, was possible. Programs and data were to be supplied to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell.


The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.


The machine was about a century ahead of its time. All of the parts for it had to be made by hand, a major problem for a device with thousands of components. Eventually, the project ended when the British government decided to stop funding it. Babbage's failure to complete the Analytical Engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.


Analog computers

During the first half of the twentieth century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, built by Sir William Thomson (later Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer that uses wheel-and-disc mechanisms to solve differential equations by integration, was conceived in 1876 by James Thomson, the elder brother of Sir William Thomson.


The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This was based on the mechanical integrators of James Thomson and the torque amplifiers of H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, although analog computers remained in use during that time in some specialized applications such as education (slide rules) and aircraft (control systems).


Digital computers

Electromechanical computers

The US Navy had built an electromechanical analog computer small enough to be used on a submarine by 1938. This was the Torpedo Data Computer, which solved the problem of shooting a torpedo at a moving target using trigonometry. Similar devices were developed in various countries during World War II.


In 1941, Konrad Zuse followed his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film, while data could be stored in 64 words of memory or supplied from the keyboard.


It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Replacing the hard-to-implement decimal system (used in Charles Babbage's earlier design) with the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at the time. The Z3 was not itself a universal computer, but it could be shown to be Turing-complete in principle.


The Z4, Zuse's next computer, was the world's first commercial computer; it was finished in 1950 and delivered to the ETH Zurich after an initial delay owing to the Second World War. Zuse KG, Zuse's own company, built the computer. Zuse KG was created in 1941 as the first corporation dedicated only to the development of computers.



Electronic circuits with vacuum tubes

At the same time that digital calculation superseded analog, pure electronic circuit elements quickly supplanted their mechanical and electromechanical counterparts. In the 1930s, engineer Tommy Flowers of the Post Office Research Station in London began to investigate the use of electronics for telephone exchanges. He created experimental equipment in 1934 that was put into service five years later, turning a piece of the telephone exchange network into an electronic data processing system that used thousands of vacuum tubes.


In 1942, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC), the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.


ENIAC was developed and constructed at the University of Pennsylvania under the direction of John Mauchly and J. Presper Eckert from 1943 until it became fully operational at the end of 1945. It combined the high speed of electronics with the ability to be programmed for many complex problems. It could perform 5000 operations per second, a thousand times faster than any other machine, and it also had modules for multiplication, division, and square root. Its high-speed memory was limited to 20 words (about 80 bytes). The machine was huge, weighing 30 tons, consuming 200 kilowatts of electricity, and containing over 18,000 vacuum tubes, 1,500 relays, and tens of thousands of resistors, capacitors, and inductors.


Modern computers

The concept of the modern computer

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing Machine", now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable.


The fundamental concept of Turing's design is the stored program, in which all the instructions for computing are kept in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine, setting aside the limitations imposed by their finite memory.
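To make the idea concrete, the following short Python sketch simulates a simple tape machine of this kind. The rule-table format and the example machine (which adds one to a binary number written on the tape) are invented for illustration and are not any particular historical design.

# Minimal Turing-style machine: the "program" is a transition table mapping
# (state, symbol read) -> (symbol to write, head movement, next state).
def run_turing_machine(rules, tape, state, blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        # Grow the tape lazily in either direction, as if it were unbounded.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Example machine: increment a binary number. Walk right to the end of the
# number, then add 1 while propagating the carry back to the left.
rules = {
    ("seek_end", "0"): ("0", "R", "seek_end"),
    ("seek_end", "1"): ("1", "R", "seek_end"),
    ("seek_end", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry: write 1, finished
    ("carry", "_"): ("1", "L", "halt"),    # ran off the left edge: new leading digit
}

print(run_turing_machine(rules, "1011", "seek_end"))  # prints 1100 (11 + 1 = 12)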


Stored programs

Early computing machines had fixed programs. Changing the function of such a machine required rewiring and restructuring it. This changed with the proposal of the stored-program computer. A stored-program computer includes, by design, an instruction set and can store in memory a set of instructions (a program) that details the computation.


The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. Also in 1945, John von Neumann of the University of Pennsylvania circulated his First Draft of a Report on the EDVAC.


The Ferranti Mark 1, the world's first commercially available general-purpose computer, was based on the Manchester Mark 1. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, J. Lyons & Company, a British catering firm, decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 and ran the world's first regular routine office computer job.



Transistors for computers

Julius Edgar Lilienfeld suggested the notion of a field-effect transistor in 1925. While working at Bell Labs under William Shockley, John Bardeen and Walter Brattain created the first operational transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. [50] [51] Transistors began to replace vacuum tubes in computer designs in 1955, resulting in the "second generation" of computers. Transistors offer several advantages over vacuum tubes: they are smaller and use less power, thus they produce less heat.


Compared with vacuum tubes, junction transistors were far more reliable and had practically indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications.


Mohamed M. Atalla and Dawon Kahng at Bell Labs invented the metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits.


It not only permitted the practical use of MOS transistors as memory cell storage elements, but it also paved the way for the development of MOS semiconductor memory, which eventually supplanted magnetic-core memory in computers. The MOSFET was the catalyst for the microcomputer revolution,[61] and the computer revolution as a whole. [62][63] The MOSFET is the most extensively used transistor in computers[64][65] and a key component of digital electronics.


ICs for computers

The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by Geoffrey W. A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.


Systems on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as Package on Package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while being hundreds of thousands of times more powerful than ENIAC and integrating billions of transistors.


Mobile computers

The original mobile computers were bulky and relied on mains power to function. An early example was the IBM 5100, which weighed 50 pounds (23 kilograms). Later portables, such as the Osborne 1 and Compaq Portable, were much lighter, but they still required a power supply. The early laptops, such as the Grid Compass, eliminated this requirement by adding batteries, and portable computers rose in popularity in the 2000s as computing resources continued to be miniaturized and portable battery life improved. By the early 2000s, the same technologies had enabled manufacturers to embed computational power into cellular mobile phones.


These smartphones and tablets run a variety of operating systems and have lately surpassed desktop computers as the most popular computing device.


These are powered by Systems on a Chip (SoCs), which are complete computers on a microchip the size of a coin.


Hardware for computers

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuit boards, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers, and "mice" input devices are all hardware.


A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to billions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
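As an illustration of how useful operations can be built from such gates, the following Python sketch models a few gates as functions on bits and composes them into a ripple-carry adder. This is only a software model of the idea, not a description of any particular hardware design.

# Logic gates modelled as tiny functions on bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # Add three input bits; return (sum bit, carry out).
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_bits(x_bits, y_bits):
    # Ripple-carry addition of two equal-length bit lists, least significant bit first.
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 (binary 110) plus 3 (binary 011), written least significant bit first:
print(add_bits([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] -> binary 1001 = 9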


Input devices for computers

When raw data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:

. Computer keyboard
. Digital camera
. Digital video
. Graphics tablet
. Image scanner
. Joystick
. Microphone
. Mouse
. Overlay keyboard
. Real-time clock
. Trackball
. Touchscreen
. Light pen


Output devices for computers


Output devices are the means through which a computer generates output. The following are some examples of output devices:


. Computer monitor
. Printer
. PC speaker
. Projector
. Sound card
. Video card


Control unit for computers


The control unit (also known as the control system or central controller) is responsible for managing the computer's many components. It reads and interprets (decodes) program instructions, converting them into control signals that activate other computer components. To boost performance, sophisticated computers' control systems may modify the sequence in which particular instructions are executed.


A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. In outline, the control system's job is as follows (a short sketch of this cycle in code follows the list):

. Read the code for the next instruction from the cell indicated by the program counter.

. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.

. Increment the program counter so it points to the next instruction.

. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.

. Provide the necessary data to an ALU or register.

. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.

. Write the result from the ALU back to a memory location, to a register, or perhaps to an output device.

. Jump back to step 1.
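As a concrete illustration, the following Python sketch runs that cycle for a tiny imaginary machine. The instruction names (LOAD, ADD, STORE, JUMP, HALT) and the two-field instruction format are invented for this example; real CPUs encode their instructions as binary numbers.

# Minimal fetch-decode-execute loop for a made-up accumulator machine.
def run(program, memory):
    pc = 0                       # program counter
    acc = 0                      # a single accumulator register
    while True:
        op, arg = program[pc]    # fetch the instruction the program counter points at
        pc += 1                  # advance the program counter to the next instruction
        if op == "LOAD":         # copy a memory cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":        # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == "STORE":      # write the accumulator back into memory
            memory[arg] = acc
        elif op == "JUMP":       # overwrite the program counter: a jump
            pc = arg
        elif op == "JUMP_IF_ZERO":
            if acc == 0:
                pc = arg
        elif op == "HALT":
            return memory

# Add the numbers in cells 0 and 1, leaving the result in cell 2.
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(program, {0: 2, 1: 3, 2: 0}))   # {0: 2, 1: 3, 2: 5}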



Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and conditional instruction execution (both examples of control flow).



Central processing unit (CPU)


The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.


Arithmetic logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic. [92] The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers), while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down more complex operations into simple steps that it can perform.


Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
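The following short Python sketch illustrates these kinds of operations, including the point above that an operation the ALU lacks (here, multiplication) can be broken down into simpler steps it does support, such as repeated addition. It is only an illustration of the idea, not a model of any specific ALU.

def multiply_by_repeated_addition(a, b):
    # Multiply two non-negative integers using only addition.
    total = 0
    for _ in range(b):
        total += a
    return total

# Comparison yields a Boolean truth value.
print(64 > 65)                               # False
# Boolean/bitwise logic on two 4-bit patterns.
x, y = 0b1100, 0b1010
print(format(x & y, "04b"))                  # AND -> 1000
print(format(x | y, "04b"))                  # OR  -> 1110
print(format(x ^ y, "04b"))                  # XOR -> 0110
print(multiply_by_repeated_addition(6, 7))   # 42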


Numerous ALUs can be found in superscalar computers, allowing them to process multiple instructions at the same time. [93] ALUs that can do arithmetic on vectors and matrices are commonly found in graphics processors and computers with SIMD and MIMD characteristics.


Memory on a computer

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, and even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.


In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte can represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four, or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
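A brief Python sketch of these conventions: the same eight bits read as an unsigned byte or as a two's-complement signed byte, and a larger integer split across several consecutive bytes. The helper function is written out only to illustrate the rule.

def to_signed_byte(value):
    # Interpret an unsigned byte (0-255) as a two's-complement signed value.
    return value - 256 if value >= 128 else value

print(to_signed_byte(0b11111111))     # 255 unsigned reads as -1
print(to_signed_byte(0b10000000))     # 128 unsigned reads as -128
print(to_signed_byte(0b01111111))     # 127 stays 127

# A larger integer stored in several consecutive bytes (little-endian order):
n = 300000
print(list(n.to_bytes(4, "little")))               # [224, 147, 4, 0]
print(int.from_bytes([224, 147, 4, 0], "little"))  # 300000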


Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely.


In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable; however, because it is typically much slower than conventional ROM and RAM, its use is restricted to applications where high speed is unnecessary.


There may be one or more RAM cache memories in more advanced systems, which are slower than registers but faster than main memory. In general, systems using this type of cache are designed to automatically move frequently needed data into the cache, generally without the need for programmer intervention.


I/O (input/output) on a computer

I/O is the means by which a computer exchanges information with the outside world. [95] Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the monitor and printer. Hard disk drives, floppy disk drives, and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.


I/O devices are often complex computers in their own right, with their own processor and memory. A graphics processing unit (GPU) might contain fifty or more small processors that perform the calculations necessary to display three-dimensional graphics. Modern desktop PCs contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat-screen display contains its own computer circuitry.


Multitasking on a Computer

While a computer may be viewed as running one large program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. [97] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later.


If numerous programs are executing "at the same time," the interrupt generator may be generating hundreds of interrupts per second, each resulting in a program switch. Because modern computers execute instructions several orders of magnitude quicker than human perception, it may appear that multiple programs are running at once, even though only one is ever operating at any given time. Because each application is given a "slice" of time in turn, this type of multitasking is frequently referred to as "time-sharing."


Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running; however, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may run simultaneously without unacceptable loss of speed.
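The switching idea can be imitated in a few lines of Python using generators as a toy round-robin scheduler: each task runs for one "slice", then yields so the next task can run. This is only an analogy; real operating systems preempt programs with hardware timer interrupts rather than relying on tasks yielding cooperatively.

def count_task(name, steps):
    for i in range(1, steps + 1):
        yield f"{name}: step {i}"      # one "time slice" of work, then give up the CPU

def round_robin(tasks):
    while tasks:
        task = tasks.pop(0)            # take the next task in turn
        try:
            print(next(task))          # let it run for one slice
            tasks.append(task)         # put it back at the end of the queue
        except StopIteration:
            pass                       # the task has finished; drop it

round_robin([count_task("editor", 2), count_task("browser", 3)])
# editor: step 1 / browser: step 1 / editor: step 2 / browser: step 2 / browser: step 3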


Multiprocessing on a computer

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers, and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available and are increasingly being used in lower-end markets as a result.


Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. [g] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers are commonly used for large-scale simulation, graphics rendering, and cryptography applications, as well as other so-called "embarrassingly parallel" tasks.


Software for computers

Software refers to computer components that don't have a physical form, such as programs, data, protocols, and so on. In contrast to the physical hardware from which the system is constructed, software is the part of a computer system that consists of encoded information or computer instructions. Computer software contains programs, libraries, and non-executable data like online documentation or digital media. It is frequently split into two categories: system software and application software. Both computer hardware and software are interdependent, and neither can be used effectively on its own. Firmware refers to software that is stored in hardware that cannot be easily modified, such as the BIOS ROM in an IBM PC compatible computer.



Computer Programming Languages

There are thousands of distinct programming languages, some of which are intended for broad usage and others which are only helpful for certain specific applications.


Programs on computers

Modern computers are distinguished from other machines by their ability to be programmed, which is their defining feature. That is to say, the computer can be given a set of instructions (a program) and it will process them. Machine code in the form of an imperative programming language is common in modern computers based on the von Neumann architecture.


In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task they almost always contain errors.


Architecture of stored programs

This section relates to the majority of RAM-based systems.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions that tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there.


"Jump" instructions are what they're called (or branches). Jump instructions can also be made conditional, allowing alternate sequences of instructions to be utilized depending on the outcome of a prior calculation or an external event. Many computers directly support subroutines by offering a form of jump that "remembers" where it jumped from, as well as another instruction to return to the instruction after that jump instruction.


By comparison, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:


begin:
  addi $8, $0, 0        # initialize sum to 0
  addi $9, $0, 1        # set first number to add = 1
loop:
  slti $10, $9, 1000    # check if the number is less than 1000
  beq $10, $0, finish   # if not, exit the loop
  add $8, $8, $9        # update sum
  addi $9, $9, 1        # get next number
  j loop                # repeat the summing process
finish:
  add $2, $8, $0        # put sum in output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second.


Machine code

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes.

This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. [100][101] In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as CPU caches.
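As an informal illustration of instructions being "just numbers", the following Python sketch packs a made-up opcode and operand into a single 16-bit word and unpacks it again. The 8-bit-opcode / 8-bit-operand layout is invented for the example; real instruction formats vary by architecture.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 4}

def encode(op, operand=0):
    # Pack an opcode and an operand into one 16-bit machine word.
    return (OPCODES[op] << 8) | operand

def decode(word):
    # Split a 16-bit machine word back into (opcode number, operand).
    return word >> 8, word & 0xFF

program_as_numbers = [encode("LOAD", 10), encode("ADD", 11),
                      encode("STORE", 12), encode("HALT")]
print(program_as_numbers)                      # [266, 523, 780, 1024]
print([decode(w) for w in program_as_numbers]) # [(1, 10), (2, 11), (3, 12), (4, 0)]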


Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid of the two techniques.


Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally specific to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a handheld videogame console) cannot understand the machine language of an x86 CPU that might be in a PC. Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 as well as the Zilog Z80.


High-level languages

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in higher-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually translated into machine code (or sometimes into assembly language and then into machine language) using a computer program called a compiler.
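For comparison, the repeated-addition task shown earlier in MIPS assembly can be expressed in a high-level language in a handful of readable lines; translating a description like this (here in Python) into machine code is exactly the compiler's job.

total = 0
number = 1
while number <= 1000:
    total += number
    number += 1
print(total)   # 500500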


Program design

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered.


Large programs involving thousands of lines of code or more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Software engineering is the academic and professional discipline that concentrates specifically on this problem: producing software with acceptably high reliability within a predictable schedule and budget.


Bugs in Computers

"Bugs" refer to errors in computer programming. They could be harmless and have no influence on the program's functionality, or they could have relatively minor consequences. They may, however, cause the software or the entire system to "hang," becoming unresponsive to input such as mouse clicks or keystrokes, or to entirely fail or crash. An exploit, or code designed to take advantage of a bug and interrupt a computer's correct execution, can sometimes be used for nefarious purposes by an unscrupulous user. Bugs aren't always the computer's responsibility.


Because computers only carry out the instructions they are given, defects are almost always the product of a programming error or a design flaw. After a dead moth was discovered shorting a relay in the Harvard Mark II computer in September 1947, Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with coining the term "bugs" in computing.


Networking and the Internet

Since the 1950s, computers have been used to coordinate information between numerous sites. The SAGE system, developed by the US military, was the first large-scale example of such a system, and it paved the way for a variety of special-purpose commercial systems like Sabre. In the 1970s, computer engineers at research institutions across the United States began to use telecommunications technology to connect their computers. ARPA (now DARPA) supported the project, and the computer network that resulted was known as the ARPANET. The technologies that allowed for the creation of the Arpanet spread and evolved.


Unconventional computers

A computer does not need to be electronic, nor does it need to have a CPU, RAM, or a hard disk. The modern definition of a computer is: "A device that computes, specifically a programmable [typically] electronic machine that executes high-speed mathematical or logical operations or that assembles, saves, correlates, or otherwise processes information." Any device that processes information qualifies as a computer, especially if the processing is purposeful.


Future

Many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers, are being actively researched. Most computers are universal and can compute any computable function, limited only by their memory capacity and operating speed. However, different computer designs can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms very quickly (by quantum factoring).


Computer architecture paradigms

There are a variety of computer architectures to choose from:


. Quantum computer vs. chemical computer
. Scalar processor vs. vector processor
. Non-uniform memory access (NUMA) computers
. Register machine vs. stack machine
. Harvard architecture vs. von Neumann architecture
. Cellular architecture

A quantum computer, out of all of these abstract machines, has the most potential to revolutionize computing. Logic gates are a standard concept that may be used to represent most of the digital and analog paradigms mentioned above. Computers are more versatile than calculators due to their capacity to store and execute lists of instructions called programs. The Church–Turing thesis states that every computer with a minimum capability (being Turing-complete) is capable of completing the same tasks as any other computer. As a result, given enough time and storage capacity, any sort of computer (netbook, supercomputer, cellular automaton, etc.) may accomplish the same computational tasks.


Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging fields of artificial intelligence and machine learning. Artificial intelligence-based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation, and the emerging field of on-line marketing.

