Computer | History, Parts, Networking, Operating Systems
What is a computer?
A computer is an apparatus for processing, storing, and displaying information.
The term "computer" used to refer to a human who performed calculations, but it is now nearly always used to describe automated electrical equipment. This article's first section focuses on contemporary digital electronic computers, including their structure, component parts, and purposes. The second section discusses computer history. For more details on computer architecture, software, and theory, see computer science.
Computer fundamentals
The majority of early computers' functions were numerical computations. Nevertheless, since any information can be represented numerically, people quickly understood that computers are capable of processing information for a variety of purposes. Their capacity to handle vast volumes of data has extended the range and accuracy of weather forecasting. Their speed has allowed them to control mechanical devices such as automobiles, nuclear reactors, and robotic surgical tools, as well as to route telephone connections through a network. They are also affordable enough to be embedded in everyday appliances, making rice cookers and clothes dryers "smart." Computers let us ask and answer questions that could not be pursued before—questions about a word's use across all texts stored in a database, about patterns of activity in a consumer market, or about DNA sequences in genes. Increasingly, computers can also learn and adapt as they operate.
There are theoretical and practical constraints on computers as well. For instance, there are undecidable propositions whose truth cannot be determined within a given set of rules, such as the logical structure of a computer. Because no universal algorithmic method can identify such propositions, a computer asked to determine the truth of one would continue forever unless forcibly stopped—a condition known as the "halting problem" (see Turing machine). Other restrictions reflect the current state of technology. Human minds are adept at recognizing spatial patterns—swiftly differentiating between human faces, for example—but this is a difficult task for computers, which must process information sequentially rather than grasping the big picture at a glance. Natural language interaction is another challenge. Because so much common knowledge and contextual information is presupposed in everyday human conversation, researchers have yet to solve the problem of providing relevant information to general-purpose natural language programs.
Analog computers
Analog computers represent quantitative data by continuous physical magnitudes. Initially, they employed mechanical components to represent values (see differential analyzer and integrator), but after World War II they switched to using voltages. By the 1960s, digital computers had largely superseded them; nevertheless, analog computers and a few hybrid digital-analog systems remained in use through the 1960s for jobs like simulating airplanes and spaceflight.
One benefit of analog computation is that it may be rather straightforward to design and build an analog computer to solve a single problem. Another benefit is that analog computers can represent and solve problems in "real time," that is, at the same rate as the system they are modeling. Their primary drawbacks are that analog representations have limited precision (usually a few decimal places, and fewer in complicated processes) and that general-purpose analog devices are expensive and difficult to design.
Mainframe computer
International Business Machines Corporation (IBM), Unisys, and other businesses produced enormous, pricey computers with increasing processing capability in the 1950s and 1960s. They were employed by large enterprises and government research facilities, typically as the only computer in the organization. Early IBM computers were nearly exclusively leased rather than purchased; in 1964 the biggest IBM S/360 computers cost several million dollars, and in 1959 the IBM 1401 rented for $8,000 per month.
These computers came to be called mainframes, though the term did not become common until smaller computers were built. Mainframe computers were distinguished by vast storage capacities, fast components, and powerful processing capabilities (for their time). Because they regularly served crucial needs in an organization, they were extremely dependable and were occasionally built with redundant components that allowed them to withstand partial failures. Because they were complicated systems, they were run by a crew of systems programmers who alone had access to the computer. Other users submitted "batch jobs," which were executed one by one on the mainframe.
Such systems are still crucial today, although they are no longer the only, or even the major, central computing resource of an enterprise, which will often contain hundreds or thousands of personal computers (PCs). Nowadays, mainframes may execute applications for hundreds or thousands of users at once through time-sharing techniques, or provide high-capacity data storage for Internet servers. Because of these current responsibilities, such computers are now referred to as servers rather than mainframes.
Supercomputer
Supercomputers are the common name for today's most powerful computers. They are quite costly, and their use has traditionally been restricted to high-priority calculations for government-sponsored research, such as nuclear simulations and weather prediction. Many of the computing techniques pioneered in early supercomputers are widely used in PCs today. On the other hand, the construction of expensive, special-purpose processors for supercomputers has given way to the use of massive arrays of commodity processors (ranging from a few dozen to over 8,000) running in parallel over a high-speed communications network.
Minicomputer
Minicomputers have been around since the early 1950s, although the phrase wasn't used until the middle of the 1960s. Minicomputers, which were often employed in a single department of an enterprise and frequently devoted to one task or shared by a small group, were relatively compact and affordable. Minicomputers typically had limited processing capability, but they worked well with many industrial and laboratory instruments for data collection and input.
With its Programmed Data Processor (PDP) line, Digital Equipment Corporation (DEC) was one of the most significant minicomputer manufacturers. DEC's PDP-1 sold for $120,000 in 1960. Five years later, its PDP-8, which cost $18,000 and sold more than 50,000 units, was the first widely used minicomputer. More than 650,000 units of the DEC PDP-11, released in 1970, were sold. It was available in several versions, ranging from small and affordable enough to control a single industrial process to large machines shared in university computer centers. The microcomputer overtook this market in the 1980s.
Microcomputer
A microcomputer is a compact computer designed around a chip known as a microprocessor. In contrast to early minicomputers, which replaced vacuum tubes with discrete transistors, microcomputers (and later minicomputers as well) employed microprocessors that integrated thousands or millions of transistors on a single chip. The Intel Corporation created the first microprocessor, the Intel 4004, in 1971; it was capable of performing computer-like tasks despite being designed for use in a Japanese calculator. In 1975 the Intel 8080 microprocessor was used in the Altair, generally considered the first personal computer. Like early minicomputers, early microcomputers had very limited data handling and storage capabilities, but these have grown as processor power and storage technology have advanced.
In the 1980s a distinction was frequently drawn between microprocessor-based scientific workstations and personal computers. The former made use of the most powerful microprocessors on the market and featured expensive, high-performance color graphics capabilities. They were employed by scientists and engineers for data visualization and computer-aided engineering. Today the difference between a workstation and a PC is essentially nonexistent, because PCs can perform workstation-level computation and display.
Embedded processors
The embedded processor is yet another type of computer. These tiny computers use basic microprocessors to manage mechanical and electrical operations. They generally do not need to do complex calculations, run extremely fast, or have high "input-output" capability, so they can be cheap. Embedded processors control automobiles, large and small home appliances, and industrial automation, and they are also widely used in aviation. One particular kind, the digital signal processor (DSP), has become even more widespread than the general-purpose microprocessor; DSPs are used in wireless telephones, digital telephone and cable modems, and some audio equipment.
Computer hardware
A computer's hardware has three primary components: the central processing unit (CPU), main memory (also known as random-access memory, or RAM), and peripherals. The last class includes all types of input and output (I/O) devices: keyboards, monitors, printers, disk drives, network connections, scanners, and more.
The central processing unit (CPU) and random-access memory (RAM) are integrated circuits (ICs)—tiny silicon wafers, or chips, containing thousands or millions of transistors. In 1965 Gordon Moore, one of Intel's founders, stated what is now known as Moore's law: the number of transistors on a chip doubles roughly every 18 months. Moore suggested that financial limits would eventually cause his law to fail, yet it has been remarkably accurate for far longer than he anticipated. It now appears that technical limits may finally invalidate Moore's law: sometime between 2010 and 2020, transistors would have to consist of only a few atoms each, at which point the principles of quantum physics imply that they would cease to work reliably.
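As a quick worked illustration of what an 18-month doubling implies, the short sketch below projects transistor counts from a purely hypothetical starting figure; the numbers are illustrative only, not historical data.

```python
# A quick sketch of Moore's-law growth: transistor count doubling every
# 18 months, i.e. N(t) = N0 * 2**(t / 1.5) with t in years.
# The starting count below is hypothetical, chosen only for illustration.

def transistors_after(years, initial_count):
    """Projected transistor count after `years`, doubling every 1.5 years."""
    return initial_count * 2 ** (years / 1.5)

if __name__ == "__main__":
    n0 = 10_000  # hypothetical starting count
    for years in (3, 15, 30):
        print(f"after {years:2d} years: ~{transistors_after(years, n0):,.0f} transistors")
```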
Central processing unit
The CPU supplies the circuits needed to implement the machine language, or set of instructions, on the computer. It is made up of control circuits and an arithmetic-logic unit (ALU). The control section sets the order of operations, including branch instructions that move control from one area of a program to another, while the ALU performs fundamental arithmetic and logic functions. Despite originally being viewed as a component of the CPU, main memory is now seen as an independent entity. However, the lines blur, and today's CPU chips also have a small amount of high-speed cache memory where information and instructions are temporarily stored for quick access.
The ALU includes circuits for logic operations such as AND and OR (where a 1 is read as true and a 0 as false, so that, for example, 1 AND 0 = 0; see Boolean algebra) as well as circuits for adding, subtracting, multiplying, and dividing two numbers. The ALU also contains from a few to more than a hundred registers that serve as temporary storage for computation results before they are transferred to main memory or used in further arithmetic operations.
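The bitwise operators of an ordinary programming language mirror these ALU operations. The following minimal sketch, using Python's built-in operators, shows the 1 AND 0 = 0 convention mentioned above and the same operations applied to whole words alongside ordinary addition.

```python
# Boolean operations as an ALU performs them on single bits:
# 1 is read as true, 0 as false, so 1 AND 0 = 0 and 1 OR 0 = 1.
a, b = 1, 0
print(a & b)   # AND -> 0
print(a | b)   # OR  -> 1

# The same operators act bit-by-bit on whole machine words,
# alongside ordinary arithmetic such as addition:
x, y = 0b1100, 0b1010
print(bin(x & y))  # 0b1000
print(bin(x | y))  # 0b1110
print(x + y)       # 22
```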
The circuits of the CPU control section provide branch instructions, which make simple decisions about which instruction to execute next. A branch instruction might say, for instance, "If the outcome of the most recent ALU operation is negative, jump to point A in the program; otherwise, proceed with the following instruction." Such instructions allow a computer to make "if-then-else" decisions and to execute a sequence of instructions such as a "while-loop," which repeats a set of instructions as long as a certain condition is fulfilled. A related instruction is the subroutine call, which transfers control to a subprogram and then, when the subprogram finishes, returns control to the main program where it left off.
Programs and data in memory are indistinguishable in a stored-program computer. Both are bit patterns—strings of 0s and 1s—that the CPU fetches from memory and may interpret as either data or program instructions. The memory address (location) of the upcoming instruction is stored in a program counter on the CPU. The fundamental way that CPUs work is the "fetch-decode-execute" cycle, whose steps follow (a minimal code sketch appears after the list).
Fetch the instruction from the address held in the program counter and store it in a register.
Decode the instruction. Parts of it specify the operation to be performed and the data on which it is to operate. These may be in CPU registers or in memory locations. If the instruction is a branch, part of it will contain the memory address of the next instruction to execute when the branch condition is met.
Retrieve the operands, if any.
If the operation is an ALU operation, carry it out.
If there is a result, keep it (either in memory or a register).
Update the program counter so that it contains the position of the next instruction, which is either the following memory location or the address indicated by a branch instruction.
The cycle is now ready to start over, and it keeps going until a particular halt instruction causes it to stop.
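The following toy simulator is a minimal sketch of the fetch-decode-execute cycle just described. The three-field instruction format, the opcode names, and the tiny program are invented for illustration and do not correspond to any real machine language.

```python
# A minimal sketch of the fetch-decode-execute cycle described above.
# The instruction format and the tiny program are hypothetical.

memory = {
    # address: (opcode, register, operand address)
    0: ("LOAD", "R0", 100),    # fetch the word at address 100 into register R0
    1: ("ADD",  "R0", 101),    # add the word at address 101 to R0
    2: ("STORE", "R0", 102),   # store R0 at address 102
    3: ("HALT", None, None),
    100: 7, 101: 5, 102: 0,    # data words
}
registers = {"R0": 0}
pc = 0                         # program counter

while True:
    instruction = memory[pc]           # fetch
    opcode, reg, addr = instruction    # decode
    pc += 1                            # update the program counter
    if opcode == "HALT":
        break
    elif opcode == "LOAD":
        registers[reg] = memory[addr]  # retrieve operand, store in register
    elif opcode == "ADD":
        registers[reg] += memory[addr] # ALU operation
    elif opcode == "STORE":
        memory[addr] = registers[reg]  # keep the result in memory

print(memory[102])  # 12
```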
A high-frequency clock controls all internal CPU functions as well as the steps of this cycle (now typically measured in gigahertz, or billions of cycles per second). Another element that influences speed is the "word" size—the number of bits read from memory and operated on by CPU instructions at once. Digital words are now commonly 32 or 64 bits, although sizes from 8 to 128 bits exist.
Processing instructions sequentially, or one at a time, frequently creates a bottleneck, because many program instructions may be ready and waiting for execution. Since the early 1980s, CPU design has been influenced by a trend known as reduced-instruction-set computing (RISC). This design reduces data transfers between memory and the CPU (all ALU operations use only data in CPU registers) and calls for simple instructions that execute quickly. As transistor density has increased, the RISC design requires only a small portion of the CPU chip for the fundamental instruction set. The remainder of the chip can then be used to accelerate CPU processing by adding circuitry that enables several instructions to run in parallel.
The CPU uses two main types of instruction-level parallelism (ILP), both of which were first developed for use in supercomputers. One is the pipeline, which allows several instructions to be at different points of the fetch-decode-execute cycle simultaneously. While the first instruction is being executed, a second can obtain its operands, a third can be decoded, and a fourth can be fetched from memory. If the execution time for each of these stages is the same, a new instruction can enter the pipeline at each stage, and (for instance) five instructions can be finished in the time it would take to complete one without a pipeline. The second type of ILP gives the CPU multiple execution units—duplicate arithmetic circuits as well as specialized circuits for graphics instructions or floating-point computations (arithmetic operations involving noninteger numbers, such as 3.27). This "superscalar" architecture allows several instructions to execute at once.
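As a rough illustration of the pipeline timing described above, the sketch below assumes a five-stage pipeline in which every stage takes exactly one clock cycle—a simplification—and compares sequential with pipelined execution of five instructions.

```python
# A rough sketch of the timing benefit of a five-stage pipeline.
# Stage names and the one-cycle-per-stage assumption are simplifications.

stages = ["fetch", "decode", "operands", "execute", "store"]
num_instructions = 5

# Without a pipeline, each instruction finishes before the next begins.
sequential_cycles = num_instructions * len(stages)

# With a pipeline, a new instruction enters at each cycle, so the last one
# finishes after (stages - 1) start-up cycles plus one cycle per instruction.
pipelined_cycles = len(stages) - 1 + num_instructions

print(f"sequential: {sequential_cycles} cycles")  # 25
print(f"pipelined:  {pipelined_cycles} cycles")   # 9

# Which instruction occupies which stage at each cycle:
for cycle in range(pipelined_cycles):
    active = [
        f"I{i + 1}:{stages[cycle - i]}"
        for i in range(num_instructions)
        if 0 <= cycle - i < len(stages)
    ]
    print(f"cycle {cycle + 1}: " + ", ".join(active))
```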
Both ILP types involve complications. Instructions preloaded into the pipeline may be rendered useless if a branch instruction jumps to a new section of the program. Superscalar execution must also determine whether an arithmetic operation depends on the result of another operation, since the two cannot then be performed simultaneously. CPUs now incorporate extra circuitry to predict which branch will be taken and to analyze dependencies between instructions. These circuits have become quite complex and routinely reorder instructions to execute more of them concurrently.
Main memory
The earliest forms of computer main memory were mercury delay lines, mercury-filled tubes that stored data as ultrasonic waves, and cathode-ray tubes, which stored data as charges on the tubes' screens. The magnetic drum, created in the late 1940s, used an iron oxide coating on a rotating drum to store data and programs as magnetic patterns.
In a binary computer, any bistable device—one that can be placed in either of two states—can represent the two possible bit values of 0 and 1 and can thus serve as computer memory. Magnetic-core memory, the first comparatively affordable RAM device, debuted in 1952. It was made up of small, doughnut-shaped ferrite magnets threaded on the intersections of a two-dimensional wire grid. These wires carried currents to change the direction of each core's magnetization, while a third wire threaded through the doughnut sensed its magnetic orientation.
The first integrated circuit (IC) memory chip appeared in 1971. IC memory stores a bit in a transistor-capacitor pair: the capacitor represents a 1 by holding a charge and a 0 by holding none, and the transistor switches it between these two states. Because a capacitor's charge gradually decays, IC memory is dynamic RAM (DRAM), which must have its stored values refreshed periodically (every 20 milliseconds or so). There is also static RAM (SRAM), which does not need to be refreshed. Although faster than DRAM, SRAM requires additional transistors and is therefore more expensive; it is used mainly for CPU internal registers and cache memory.
Computers often include separate video memory (VRAM) in addition to their main memory, which is used to store bitmaps—graphical images—for the computer's display. This memory frequently has two ports, allowing for simultaneous storage of a fresh picture and the reading and display of its existing data.
Because memory is slower than the CPU and because specifying an address in a memory chip takes time, memory designs that can transfer a string of words quickly once the initial address has been supplied have an advantage. One such design, synchronous DRAM (SDRAM), attained widespread use by 2001.
However, the "bus," or the network of wires connecting the CPU to memory and other peripherals, is a bottleneck for data transport. Because of this, CPU chips now have cache memory, which is a tiny quantity of quick SRAM. Data copies from the main memory blocks are kept in the cache. Up to 85–90% of memory accesses may be made from a well-designed cache in normal applications, significantly speeding up data access.
Early core memory had a cycle time (the time between two memory reads or writes) of around 17 microseconds (millionths of a second), which later fell to about 1 microsecond. The original DRAM had a cycle time of 500 nanoseconds, or half a microsecond; today's DRAMs have cycle times of 20 nanoseconds or less. The price per memory bit is another crucial metric. The first DRAM could store 128 bytes (1 byte = 8 bits) and cost roughly $10, or about $80,000 per megabyte (millions of bytes). By 2001 DRAM was available for less than $0.25 per megabyte. This enormous decrease in cost made feasible graphical user interfaces (GUIs), the display typefaces used by word processors, and the analysis and presentation of massive amounts of data by scientific computers.
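A quick check of the cost figure quoted above: a $10 chip holding 128 bytes works out to roughly $80,000 per megabyte.

```python
# Checking the cost-per-megabyte figure quoted above: a $10 chip holding
# 128 bytes works out to roughly $80,000 per megabyte.
chip_price = 10          # dollars
chip_capacity = 128      # bytes
bytes_per_megabyte = 2 ** 20

chips_per_megabyte = bytes_per_megabyte / chip_capacity   # 8,192 chips
cost_per_megabyte = chips_per_megabyte * chip_price
print(f"${cost_per_megabyte:,.0f} per megabyte")          # ~$81,920
```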
Secondary memory
Secondary memory stores data and programs not currently in use. In addition to punched cards and paper tape, early computers employed magnetic tape as secondary storage. Tape is inexpensive, whether in small cassettes or on huge reels, but its drawback is that it must be read or written sequentially from one end to the other.
In 1955, IBM unveiled the RAMAC, a 5-megabyte magnetic disk that rented for $3,200 a month. Like tape and drums, magnetic disks are platters covered with an iron oxide layer. The read/write (R/W) head, an arm with a tiny wire coil, travels radially across the disk, which is divided into concentric tracks made up of small arcs, or sectors, of data. Magnetized regions of the disk induce small currents in the coil as it passes, allowing a sector to be "read"; similarly, a small current in the coil induces a local magnetic change on the disk, "writing" a sector. The disk rotates quickly (as fast as 15,000 rotations per minute), so the R/W head can rapidly reach any sector on the disk.
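The sketch below illustrates how a block address can be split into a track and a sector, and what a 15,000-rpm rotation rate implies for rotational delay. The geometry (sectors per track, sector size) is hypothetical; real drives use more elaborate layouts.

```python
# A sketch of splitting a disk address into track and sector, and of the
# rotational delay implied by 15,000 revolutions per minute.
# The geometry below is hypothetical.

SECTORS_PER_TRACK = 500
SECTOR_SIZE = 512            # bytes

def locate(block_number):
    """Map a logical block number to a (track, sector) pair."""
    track = block_number // SECTORS_PER_TRACK
    sector = block_number % SECTORS_PER_TRACK
    return track, sector

print(locate(123_456))       # (246, 456)

# At 15,000 rpm one revolution takes 4 ms, so on average the R/W head
# waits half a revolution (about 2 ms) for the wanted sector to arrive.
rpm = 15_000
revolution_ms = 60_000 / rpm
print(f"average rotational latency: {revolution_ms / 2:.1f} ms")  # 2.0 ms
```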
Early disks featured large removable platters. In the 1970s IBM introduced sealed disks with fixed platters, known as Winchester disks—possibly because the first ones had two 30-megabyte platters, suggesting the Winchester 30-30 rifle. In addition to being protected against dirt, the sealed disk allowed the R/W head to "fly" on a thin air layer, extremely close to the platter. With the head closer to the platter, the area of oxide film representing a single bit could be much smaller, increasing storage capacity. This basic technology is still in use.
To improve storage and data-transfer rates, many platters—10 or more—are combined in a single disk drive, with a pair of R/W heads for the two sides of each platter. Better control over the disk arm's radial motion from track to track has also made data on the disk denser. By 2002 such densities had reached over 8,000 tracks per centimeter (20,000 tracks per inch), and a platter the size of a coin could hold almost a gigabit of data. An 80-gigabyte disk cost roughly $200 in 2002—a tiny fraction of the 1955 cost per megabyte—reflecting a decline of nearly 30 percent per year, similar to the decline in the cost of main memory.
The mid-1980s and early 1990s saw the introduction of optical storage technologies such as CD-ROM (compact disc, read-only memory) and DVD-ROM (digital videodisc, or versatile disc). Both represent bits as tiny pits in plastic, arranged in a long spiral like a phonograph record, written and read with lasers. A CD-ROM may hold 2 gigabytes of data, but the error-correcting codes included to guard against dust, tiny flaws, and scratches reduce the usable data to 650 megabytes. DVDs are denser and have smaller pits; with error correction they hold 17 gigabytes.
Although slower than magnetic disks, optical storage systems are well suited for making master copies of software or for multimedia (audio and video) data that is read sequentially. Recordable and rewritable CD-ROMs (CD-R and CD-RW) and DVD-ROMs (DVD-R and DVD-RW) are also available for low-cost data storage and sharing.
New applications continue to be made possible by memory's declining cost. 100 million words may be stored on a single CD-ROM, which is more than twice as many words as the printed Encyclopaedia Britannica has. A feature-length movie can fit on a DVD. To handle data for computer simulations of nuclear processes, astronomical data, and medical data, including X-ray pictures, bigger and quicker storage devices, such as three-dimensional optical media, are being created. Such applications frequently demand several terabytes of data (1 terabyte = 1,000 gigabytes), which might make indexing and retrieval more challenging.
Peripherals
Computer peripherals are devices that allow users to enter data and instructions into a computer for storage or processing and to output the results. Devices that allow data to be sent and received between computers are also frequently classified as peripherals.
Input devices
The category of input peripherals includes a vast array of devices. Keyboards, mice, trackballs, pointing devices, joysticks, digital tablets, touch pads, and scanners are typical examples.
When pushed, mechanical or electromechanical switches on keyboards alter the current that passes through them. These modifications are interpreted by a microcontroller built into the keyboard, which then alerts the computer. The majority of keyboards also have "function" and "control" keys, which allow users to alter input or issue unique commands to the computer, in addition to letter and number keys.
Mechanical mice and trackballs work by means of a rubber or rubber-coated ball that rotates two shafts attached to a pair of encoders; the encoders measure the horizontal and vertical components of the user's movement, which are then converted into cursor movement on the computer monitor. Optical mice use a light beam and camera lens to convert mouse movement into cursor movement.
On many laptop systems, pointing sticks use a method that makes use of a pressure-sensitive resistor. The resistor increases the flow of electricity as a user presses down on the stick, indicating that movement has occurred. The majority of joysticks work similarly.
Touchpads and digital tablets serve comparable functions. In both cases, input comes from a flat pad with electrical sensors that can detect the presence of either a special tablet pen or a user's finger.
Scanners resemble photocopiers in certain ways. The item to be scanned is illuminated by a light source, and the various reflections of light are recorded and measured by an analog-to-digital converter connected to light-sensitive diodes. The binary digit pattern that the diodes produce is saved in the computer as a graphical representation.
Multiuser systems
Multiuser, or time-sharing, systems—an extension of multiprogramming systems—were developed in the 1960s. (The history of this evolution, from Project MAC through UNIX, is traced in the section on time-sharing.) Time-sharing allows many users to use a computer simultaneously, each receiving a small share of the CPU's time. If the CPU is fast enough, it will appear to be dedicated to each user, particularly because a computer can carry out many functions while waiting for each user to finish typing the latest commands.
Multiuser operating systems, like most single-user operating systems today, use a technique called multiprocessing, or multitasking, in which even a single application may be composed of several distinct computing activities, known as processes. The system must keep track of active and queued processes, of when each process needs access to secondary memory to retrieve and store its code and data, and of the allocation of other resources, such as peripheral devices.
Because main memory was extremely limited, early operating systems had to be as compact as possible to leave room for other applications. To partially get around this constraint, operating systems use virtual memory, one of many computing techniques developed in the late 1950s at the University of Manchester in England under the guidance of Tom Kilburn. Virtual memory gives each process a large address space, frequently much bigger than main memory itself. This address space resides in secondary memory (such as tape or disks), from which portions are copied into main memory as needed, updated as necessary, and returned when the process is no longer active. Even with virtual memory, however, parts of the operating system's "kernel" must stay in main memory. Early UNIX kernels occupied tens of kilobytes; today they occupy more than a megabyte, as main memory has become less expensive.

Modern CPUs have specialized registers to make this process more efficient, but operating systems must still keep track of where each process's address space resides, using virtual memory tables that they maintain. Indeed, a large portion of an operating system consists of tables: tables of processes, of files and their locations (directories), of resources used by each process, and so on. There are also tables of user accounts and passwords that help control access and safeguard users' files against unauthorized or malicious interference.
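The following minimal sketch shows the kind of address translation that virtual memory tables support: a virtual address is split into a page number and an offset, and the page table maps the page to a physical frame (or signals a page fault if the page is in secondary memory). The 4 KB page size and the table contents are hypothetical.

```python
# A minimal sketch of virtual-to-physical address translation with a page
# table, as the kernel and the CPU's memory-management hardware perform it.
# The 4 KB page size and the table contents are hypothetical.

PAGE_SIZE = 4096  # bytes

# virtual page number -> physical frame number (None means "on disk")
page_table = {0: 7, 1: 3, 2: None, 3: 12}

def translate(virtual_address):
    """Translate a virtual address, faulting if the page is not in main memory."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        # Page fault: the OS would fetch the page from secondary memory here.
        raise RuntimeError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> physical address 12292
```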
Thin systems
Minimizing the memory requirements of operating systems has been crucial for compact, low-cost, specialized devices such as personal digital assistants (PDAs), "smart" cellular telephones, portable devices for listening to compressed music files, and Internet kiosks. These gadgets must be extremely dependable, fast, and secure against break-ins or corruption—a mobile phone that "freezes" in the middle of a call is not acceptable. One might argue that these qualities ought to be present in any operating system, yet PC users appear to have become accustomed to frequent restarts caused by operating system faults.
Reactive systems
Real-time, or embedded, systems have even fewer capabilities. These are compact systems that run the embedded control processors in machinery found in everything from household appliances to factory production lines. They interact with their surroundings, taking in information through sensors and responding appropriately. Embedded systems are classified as "soft" real-time systems if missed deadlines do not have a disastrous effect, and "hard" real-time systems if they must guarantee schedules that handle all events even in the worst case. An airplane control system must operate in hard real time, since a single error in flight might be fatal. An airline reservation system, on the other hand, is a soft real-time system, because a missed reservation is rarely disastrous.
Many features of modern CPUs and operating systems are unsuited to hard real-time systems. For instance, pipelines and superscalar multiple execution units give high performance at the cost of occasional delays when a branch prediction fails and a pipeline fills with unneeded instructions. Likewise, virtual memory and caches give good memory-access times on average, but they can occasionally be slow. Such variability makes it difficult to meet strict real-time deadlines, so embedded processors and their operating systems must generally be relatively simple.
Operating system design approaches
Operating systems can be either proprietary or open. Mainframe systems have largely been proprietary, supplied by the computer manufacturer. In the PC domain only a limited number of operating systems are available, including Microsoft's proprietary Windows systems and Apple's Mac OS for its line of Macintosh computers. The best-known open system is UNIX, which was created by Bell Laboratories and supplied free of charge to academic institutions. In its Linux variant it is available for a wide range of PCs, workstations, and, most recently, IBM mainframes.
Although open-source software is protected by copyright, its creator sometimes permits unrestricted use, which may include the ability to alter it. Like all the other software in the large GNU project, Linux is protected by the Free Software Foundation's "GNU General Public License," and this protection allows users to alter Linux and even sell copies as long as the right of free use is upheld in the copies.
One effect of the right of free use is that many authors have contributed to the GNU-Linux effort, adding numerous valuable components to the basic system. Although quality control is managed voluntarily, and despite some predictions that Linux would not withstand extensive commercial use, it has been remarkably successful and appears to be on track to become the version of UNIX on mainframes and on PCs used as Internet servers.
Other UNIX system variations exist; some are proprietary, but the majority are now freely used, at least for noncommercial purposes. They all offer a graphical user interface of some kind. Despite being a proprietary operating system, Mac OS X is based on UNIX.
Highly integrated systems are offered by proprietary systems like Windows 98, 2000, and XP from Microsoft. For instance, all operating systems offer file directory services, but on a Microsoft system, a directory may use the same window display as a web browser. The use of Windows capabilities by nonproprietary software is made more challenging by such an integrated strategy, a fact that has been a point of contention in antitrust cases against Microsoft.
Networking
Computer communication can take place via cables, optical fibers, or radio transmissions. Wired networks may use shielded coaxial cable, like the one connecting a television to a VCR or an antenna, or simpler unshielded wiring with telephone-style modular connectors. Optical fibers can carry more signals than wires; they are frequently used for connecting buildings on a college or corporate campus and, as telephone companies upgrade their networks, increasingly over greater distances. Microwave radio also carries computer network communications, typically as part of long-distance telephone networks, and low-power microwave radio is increasingly popular for wireless networks within a building.
Local area networks
Local area networks (LANs) connect computers within a single building or a small cluster of buildings. A LAN can be configured as a bus, a main channel to which nodes or secondary channels are connected in a branching structure; as a ring, in which each computer is connected to two neighboring computers to form a closed circuit; or as a star, in which each computer is linked directly to a central computer and only indirectly to its neighbors. Each arrangement has merits, though the bus configuration has become the most popular.
Even if there are only two linked computers, they must adhere to rules, or protocols, to interact. One may indicate "ready to send" and then wait for the other to signal "ready to receive," for instance. A rule like "speak only when it is your turn" or "don't talk when anyone else is talking" may be included in the protocol when numerous machines are connected to the same network. Additionally, protocols must be built to manage network faults.
Since the mid-1970s, the bus-connected Ethernet, first created at Xerox PARC, has been the most widely used LAN design. Every computer or other device on an Ethernet has a unique 48-bit address. Any computer that wants to transmit listens for a carrier signal that indicates a transmission is under way. If it detects none, it starts transmitting, sending the recipient's address at the start of its transmission. Every system on the network receives each message but ignores those not addressed to it. While a system transmits, it also listens, and if it detects a simultaneous transmission, it stops, waits for a random time, and tries again. The random delay before retrying reduces the likelihood that the systems will collide again. This scheme is known as carrier sense multiple access with collision detection (CSMA/CD). It works very well until a network becomes moderately busy, after which performance degrades as collisions become more frequent.
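The toy simulation below sketches the CSMA/CD idea—listen for a carrier, transmit if the medium is idle, and back off for a random interval after a collision. The timing model is greatly simplified, and the collision probability and backoff range are invented for illustration.

```python
# A toy sketch of the CSMA/CD idea: listen for a carrier, transmit if the
# medium is idle, and back off for a random interval after a collision.
# The timing model is greatly simplified and the parameters are invented.
import random

random.seed(1)

def try_to_send(station, medium_busy):
    """One attempt by `station`; returns True if the frame got through."""
    if medium_busy:
        return False                      # carrier sensed: wait
    if random.random() < 0.2:             # assume a 20% chance another station
        delay = random.randint(1, 16)     # started at the same instant (collision)
        print(f"{station}: collision, backing off {delay} slot times")
        return False
    print(f"{station}: frame sent")
    return True

attempts = 0
while not try_to_send("station A", medium_busy=False):
    attempts += 1
print(f"delivered after {attempts} retries")
```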
The earliest Ethernet had a bandwidth of roughly 2 megabits per second. Today, 10- and 100-megabit-per-second Ethernet is prevalent, with gigabit-per-second Ethernet also in use. Ethernet transceivers (transmitter-receivers) for PCs are inexpensive and easily installed.
Wi-Fi, a more recent wireless Ethernet standard, is increasingly used in networks in small offices and homes. Using frequencies between 2.4 and 5 gigahertz (GHz), these networks can carry data at speeds of up to 600 megabits per second. Another Ethernet-like standard was published in the first half of 2002: the first HomePlug devices could transport data at roughly 8 megabits per second over a building's existing electrical wiring, and a later version could reach speeds of one gigabit per second.
Wide area networks
Wide area networks (WANs), which often use telephone lines and satellite links, span cities, countries, and the globe. The Internet connects multiple WANs; as its name implies, it is a network of networks. Its success stems from early backing by the U.S. Department of Defense, which created its predecessor, ARPANET, to let researchers communicate easily and share computing resources, and from its adaptable communication technique. The Internet's emergence as a communication medium and one of the primary applications of computers in the 1990s may have been the most important development in computing over the previous few decades. See Internet for further information on the history and details of Internet communication protocols.
Computer software
Computer programs are referred to as software. The term is generally credited to John Tukey, a statistician at Princeton University and Bell Laboratories, who coined it in 1958 (he also coined the term "bit" for a binary digit). Initially, "software" referred largely to what is now called "system software"—an operating system and the utility programs that come with it, such as those that compile (translate) programs into machine code and load them to run. This software was included when a computer was purchased or rented. After IBM decided in 1969 to "unbundle" and sell its software separately, software quickly became a significant source of income for manufacturers as well as for specialized software companies.
What is Computer Science?
The study of computers and computing systems is known as computer science. Compared to electrical and computer engineers, computer scientists are more interested in software and software systems, including their theory, design, development, and use.
Some of the key areas of study in computer science include programming languages, software engineering, bioinformatics, security, database systems, human-computer interaction, vision and graphics, artificial intelligence, computer systems and networks, and computing theory.
Even though programming is a prerequisite for studying computer science, it is only one facet of the discipline. Along with inventing and studying approaches to solving problems, computer scientists also examine the performance of computer hardware and software. The challenges that computer scientists face range from the abstract—determining which problems can be solved by computers and the complexity of the algorithms that solve them—to the concrete—creating programs that comply with security standards, are user-friendly, and run effectively on mobile devices.
Graduates of the computer science program at the University of Maryland are lifelong learners who can easily adjust to this demanding field.
Computer Skills: The Top Software Skills for a Resume in 2022
1. Why Computer Skills Are Important
The knowledge and set of skills that enable you to utilize computers and modern technologies successfully are known as computer skills (or computer literacy). Accessing the Internet, using word processing software, managing files, and creating presentations are examples of basic computer abilities. Accessing databases, knowing how to use complex spreadsheets, and programming are examples of advanced computer abilities.
Most of the technical skills that employers look for in workers are computer-related.
For instance, a recent LinkedIn survey found that some of the most in-demand computer skills relate to cloud and distributed computing, statistical analysis and data mining, data presentation, and marketing campaign management. Let's examine the most crucial computer skills in more depth.
2. A resume's computer skills section
The lists below show the most popular and practical computer skills to mention on a resume. Both basic and advanced skills are covered.
The lists of basic computer skills cover the skills and programs that most job applicants should be at least passingly familiar with. The lists of advanced computer skills focus on software solutions and more specialized skill sets.
These lists can be used to acquaint yourself with the range of computer skills available, or as a master list to help you decide which skills to highlight on your resume.
3. How to Include Computer Skills on a Resume
Straight to the point
Your resume has to stand out among the 250 resumes the other applicants submitted.
To do this, you must know exactly what the recruiter requires. Until then, you won't be able to highlight the right computer skills.
The good news
You don't need to be clairvoyant. The ideal cheat sheet for what the recruiter wants is right in front of you.
It's known as a job offer.
Yes. You can see in the job offer itself exactly what computer knowledge and expertise the recruiter is seeking.
All you have to do is learn how to describe your PC knowledge and skills on a resume.
Major Points
What you need to know about listing computer skills on your resume is as follows:
Compile a comprehensive list of your computer skills to determine what you can offer employers.
Use the original job offer to determine which qualifications the recruiters are looking for (use the same resume keywords).
Put your computer skills in the experience, key skills, and profile sections of your resume to make them more noticeable.
If you feel your computer skills need work, you can take online classes to sharpen them.
Computer vision: What is it?
Computer vision is a field of artificial intelligence (AI) that enables computers and systems to extract useful information from digital photos, videos, and other visual inputs and to take actions or offer suggestions in response to that information. Just as artificial intelligence gives machines the capacity to think, computer vision gives them the ability to perceive, observe, and understand.
Human vision has an advantage over computer vision in that it has had longer to train. With a lifetime of context, human sight has learned how to distinguish between objects, determine their distance from the viewer, determine whether they are moving, and determine whether something is wrong in a picture.
With cameras, data, and algorithms instead of retinas, optic nerves, and a visual cortex, computer vision trains computers to perform similar tasks in much less time. Because a system trained to inspect goods or monitor a production asset can evaluate hundreds of items or processes every minute while detecting imperceptible flaws or problems, it can quickly outperform people.
Energy, utilities, manufacturing, and the automobile industries all employ computer vision, and the market is still expanding. It is anticipated to reach USD 48.6 billion by 2022.
How is computer vision implemented?
Computer vision requires a lot of data. It runs data analysis repeatedly until it can recognize images and make distinctions between objects. For instance, to be trained to detect automotive tires, a computer has to be fed a huge number of tire photos and tire-related items—particularly tires with no defects—so that it can learn the differences and recognize a tire.
Machine learning uses algorithmic models that let a computer learn how to interpret the context of visual input. If enough data is fed through the model, the computer will "look" at the data and teach itself to distinguish between different images. Algorithms allow the computer to learn on its own, instead of needing to be programmed to recognize a picture.
A CNN gives a machine learning or deep learning model its "seeing" capability by breaking images down into pixels that are given tags or labels. It uses the labels to perform convolutions—a mathematical operation on two functions that produces a third function—and makes predictions about what it is "seeing." The neural network runs convolutions and checks the accuracy of its predictions over repeated iterations until the predictions start to come true. It is then recognizing, or seeing, pictures in a way similar to how people do.
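The convolution operation itself is simple to sketch: slide a small kernel over the pixel grid and sum the element-wise products at each position. In the toy example below, the 5 × 5 image and the vertical-edge kernel are made up; a real CNN learns its kernel values during training.

```python
# A small illustration of the convolution operation a CNN applies to an
# image: slide a kernel over the pixel grid and sum the element-wise
# products. The 5x5 image and the edge-detecting kernel are made up.

image = [
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
kernel = [          # responds strongly to vertical edges
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    k = len(ker)
    out_size = len(img) - k + 1
    output = [[0] * out_size for _ in range(out_size)]
    for row in range(out_size):
        for col in range(out_size):
            output[row][col] = sum(
                img[row + i][col + j] * ker[i][j]
                for i in range(k)
                for j in range(k)
            )
    return output

for row in convolve(image, kernel):
    print(row)   # large values mark the vertical edge between the 0s and 1s
```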
Much as a person making out a picture from a distance, a CNN first discerns hard edges and simple shapes, then fills in details as it repeatedly refines its predictions. A CNN is used to understand individual pictures. Similarly, recurrent neural networks (RNNs) are employed in video applications to help computers understand the relationships between the images in a sequence of frames.
Computer vision applications
There is a lot of research under way in computer vision, but the field goes beyond research. Real-world applications show how crucial computer vision is to activities in business, entertainment, transportation, healthcare, and daily living. A major factor in the expansion of these applications is the influx of visual data coming from smartphones, security systems, traffic cameras, and other visually instrumented devices. Largely untapped today, this data could become very important to operations in many different businesses. The data establishes a training ground for computer vision programs and a starting point for their integration into a variety of human endeavors:
- For the 2018 Masters golf tournament, IBM created My Moments using computer vision. After watching hundreds of hours of Masters video, IBM Watson was able to recognize the sights (and sounds) of key shots. It curated these significant moments and delivered them to viewers as tailored highlight clips.
- By pointing a smartphone camera at a sign in another language, users may use Google Translate to get a translation of the sign in their favorite language practically instantly.
- Computer vision is used in the development of self-driving cars to interpret the visual data from a car's cameras and other sensors. Identification of other vehicles, traffic signs, lane markings, bicycles, pedestrians, and any other visual elements encountered on the road is crucial.
- To deliver sophisticated AI to the edge and assist automobile makers in identifying quality flaws before a vehicle leaves the plant, IBM is implementing computer vision technologies with partners like Verizon.
The definition of computer engineering
The phrase refers to several related occupations. The two broad categories are hardware engineering and software engineering; network engineering may be considered a third. Computer engineers can select from a variety of degrees to acquire the precise set of skills they want to hone.
There are various ways to define the term "computer engineer," but one of them is "a professional with experience in network, systems, and software engineering." A person with a background in electrical engineering can also be called a computer engineer. People who take pride in being computer engineers are knowledgeable in computer science and can work on both hardware and software development projects. Computer engineers can choose from a wide range of degrees that give them access to several IT specialties.
What does a computer engineer do?
So what does a computer engineer do? It depends on their chosen specialty. On the surface:
- Computer programming, mobile app development, and software development, in general, are the purview of software engineers.
- Hardware engineers design and maintain physical components and devices.
- Systems and networks are designed and maintained by network engineers.
No matter which specialty you choose, you can expect a solid wage. The IT industry is constantly growing, with new fields emerging year after year, if not more often. Businesses from all industries employ both software and hardware specialists, so every computer engineer can select the type of business to work with—big or small, local or multinational—or work online. Remote software engineering positions are popular not only because they seem convenient: in terms of working hours and responsibilities, they often differ little from local jobs, and their key advantages are the flexibility to work from home, experience working abroad, and enhanced career opportunities.
Given the full spectrum of degrees available to people who want to become computer engineers, what computer engineers do is very diverse. There is no single correct answer, because each computer engineer uses their own set of skills to carry out tasks. Software engineers experiment with programming and app creation, whereas hardware engineers maintain the physical machines; on a different level, hardware engineers maintain the systems that network engineers develop. It's a sector with a full circle of opportunities.
Is Computer Engineering a Reputable Profession?
The variety and wide range of employment opportunities accessible to computer engineers are one of the finest aspects of the profession. There will always be jobs, no matter what field of education you pursue. Higher education, whether it be a degree or additional certifications, is required to become a computer engineer. Given that you have invested a lot of time, money, and effort into your profession to get where you are now, this might carry some prestige.
Do employers need computer engineers?
In the broad scheme of technology, computer engineers are in extremely high demand wherever they work, and their wages amply reflect that. Today's world is digital, so to see what computer engineering does, just look around your own home. Engineers maintain your fiber optic cables, WiFi connections, and computers—and repair them when they break. It's easy to forget that today's society still needs computer engineers!
What Computer Engineering Is: An Unraveled Question
So, once more, what is computer engineering? At first glance it may seem like a nerdy job, yet nothing could be further from the truth. Engineering is a state of mind: a disciplined mind that tackles problems promptly, helps management make sound choices, and inspires team members to steadily advance their expertise.