Tuesday, June 4, 2019

Evolution of Computer Technology in the Last 25 Years

The advancement of computing technology is commonly divided into six generations. The physical size of computing machines decreased significantly from the first-generation vacuum tube computers to the third-generation computers based on integrated circuit technology. Fourth- and fifth-generation computer technology increased chip efficiency through very large scale integration (VLSI) and ultra large scale integration (ULSI) technology (Haldya, 1999). During the fifth generation, the idea of using multiple computer chips to solve the same problem flourished; it built on the original design of parallel computing developed during the fourth generation. With improved hardware, increased network bandwidth, and more efficient algorithms, massively parallel architectures allowed fifth-generation computers to increase computing efficiency significantly (Drako, 1995). This paper discusses how computer technology evolved from the end of the fifth generation to today's sixth-generation computers.

Improvements in microprocessor technology allowed millions of transistors to be placed on a single integrated chip, which opened the generation of computers based on ultra large scale integration, or ULSI. The 64-bit microprocessor was developed during this time and became the fifth-generation chip we mostly use today. Even older fourth-generation chip architecture concepts like Reduced Instruction Set Computers (RISC) and Complex Instruction Set Computers (CISC) benefited from ULSI technology. During the fourth-generation period, microprocessors were commonly classified as either RISC or CISC architectures, and the difference between the two was very clearly distinguishable.
RISC has a very simple instruction set that requires a low number of transistors but needs more memory to complete a task. CISC has a larger instruction set than RISC, which requires more transistors but less memory space (Hennessy, 1991). Because computing resources were limited, programmers chose the chip type that best matched the demands of the application being delivered. However, with the advancement of microprocessors, the 64-bit chip now has more transistors and more addressable memory available for computing. Today, the need to differentiate what used to be the two main categories of microprocessor is somewhat pointless given the complexity of modern 64-bit chips of both kinds: many new CISC chips behave like RISC, with increased processor clock rates, while new RISC chips have an increased number of instructions available, like CISC (Cole, 2015).

Two of the most classic hardware techniques used to improve performance during the fourth and fifth generations of computer development have been pipelining and caches. Both techniques rely on using more devices to achieve higher performance. Pipelining may have been available only in some mainframe computers and supercomputers during the fourth generation; however, the technique became very common within computer architecture during the fifth generation, which became the baseline for sixth-generation computers that use decentralized computing to perform artificial intelligence and neural network computing. Pipelining improves the throughput of a machine without changing the basic cycle time, and it increases performance by exploiting instruction-level parallelism (Hennessy, 1991). Instruction-level parallelism is available when instructions in a sequence are independent and thus can be executed in parallel by overlapping.
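The throughput gain from overlapping independent instructions can be sketched with a simple cycle-count model. This is an illustrative toy, not taken from the cited papers: it assumes a classic 5-stage pipeline and a stream of fully independent instructions with no stalls.

```python
def cycles_unpipelined(n_instructions, stages=5):
    # Without pipelining, each instruction occupies the whole
    # datapath for `stages` cycles before the next one starts.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages=5):
    # With pipelining, the first instruction takes `stages` cycles
    # to drain; after that, one independent instruction completes
    # every cycle (no hazards assumed).
    if n_instructions == 0:
        return 0
    return stages + (n_instructions - 1)

if __name__ == "__main__":
    n = 1000
    slow, fast = cycles_unpipelined(n), cycles_pipelined(n)
    print(f"{n} instructions: {slow} vs {fast} cycles, "
          f"~{slow / fast:.2f}x speedup")
```

For long instruction streams the speedup approaches the number of pipeline stages, which is why the basic cycle time need not change for performance to improve.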
Unarguably, pipelining led to faster speeds and better performance, but hardware could not keep up with the demand for even faster machines to support applications that process large amounts of data or critical commercial transactions very quickly. In addition to advances in pipelining, advances in cache memory technology also significantly enhanced how computers access data. Creating a small pool of memory, either inside the processor or very close to it, decreased the need to fetch data directly from main memory. This technique made cache memories one of the most important ideas in computer architecture (Uri, 2010).

Cache memories substantially improved performance through the use of memory. Caches were first used in third-generation computers of the late 60s and early 70s, both in large machines and in minicomputers. From the fourth generation on, virtually every microprocessor has included support for a cache. Although large caches can certainly improve performance, total cache size, associativity, and block size all directly affect performance and have optimal values that depend on the details of a design (Hennessy, 1991). Just like microprocessors and pipelining, cache technology has improved significantly over the last two decades. Traditional cache architectures are demand fetch: cache lines are brought into the cache only when they are explicitly required by the process. Prefetching increases the efficiency of this process by anticipating that some memory will be used in the near future and proactively fetching it into the cache. Early prefetching was done through either software or hardware mechanisms.
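The difference between demand fetch and prefetching can be shown with a toy miss-count model. This sketch is illustrative only (an idealized, unbounded cache with a simple next-line prefetcher, not any scheme from the cited papers): on a miss, the prefetcher also pulls in the following cache line, which halves the misses for a sequential access pattern.

```python
def count_misses(addresses, line_size=64, prefetch_next=False):
    """Count cold misses for an idealized cache of unbounded size.

    addresses: byte addresses accessed in order.
    prefetch_next: if True, a miss also brings in the next
    sequential line (a simple next-line prefetcher).
    """
    cached = set()
    misses = 0
    for addr in addresses:
        line = addr // line_size
        if line not in cached:
            misses += 1          # demand fetch on first touch
            cached.add(line)
            if prefetch_next:
                cached.add(line + 1)  # anticipate sequential use
    return misses

if __name__ == "__main__":
    # Walk 100 cache lines sequentially, 8 bytes at a time.
    sequential = range(0, 64 * 100, 8)
    print(count_misses(sequential),
          count_misses(sequential, prefetch_next=True))
```

Real prefetchers must also guess correctly under limited capacity; a wrong guess evicts useful data, which is why prefetch accuracy matters as much as coverage.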
As the complexity of prefetching has increased, more recent research has looked at combining the imprecise future knowledge available to the compiler with the detailed run-time information available to the hardware, such as a programmable prefetching engine consisting of a run-ahead table populated by explicit software instructions (Srinivasan, 2011).

With such advances in core computer technologies, the ability to process data and store information has become increasingly decentralized. From the cloud to PC-over-IP technology, cheaper storage, faster processors, and higher-bandwidth wide area networks allow the modern computer to work in collaboration rather than in isolation. If the first through fifth generations focused on improving hardware efficiency to meet the demands of software engineers, the current sixth generation is more about how humans interact with computers to enrich human lives. Computers became smaller while remaining capable enough to run the necessary applications on their own or through servers across the internetwork. Everything has become smarter, faster, smaller, and connected. With improved networking and parallel computing, sixth-generation computers are definitely getting closer to simulating how the human brain functions. Using basic algorithms, probability and statistics, and economic theories, new computer technology can simulate human-like decision-making to improve human lives and help solve more complex problems. In the sixth generation, we are actually experiencing the true potential of commercial Artificial Intelligence.

References

Cole, Bernard (2015). New CISC Architecture Takes on RISC. EE Times. Retrieved from http://www.eetimes.com
Drako, Nikos (1995). An Overview Of Computational Science. The Computational Science Education Project.
Haldya, Micky (1999). Computer Architecture. Biyani's Think Tanks, Chap. 5, 26-27.
Hennessy, John L. & Jouppi, Norman P. (1991). Computer Technology and Architecture: An Evolving Interaction. Computer, vol. 24, 18-29.
Srinivasan, James R. (2011). Improving Cache Utilization. Technical Report no. 800, 31-35.
Uri, Cohen (2010). From Caching to Space-based Architecture: The Evolution of Memory. Enterprise Systems Journal. Retrieved from https://esj.com/
