
The Ins and Outs of New Local I/O Trends

Linda Dailey Paulson (Computer, July 2003, Industry Trends)

As processor performance, memory, networking, storage, and other technologies have improved, vendors have had to develop higher-speed local interconnects to ensure that overall system performance keeps pace. For example, today's faster processors need to receive instructions and data in a timely manner to process information efficiently.

Likewise, today's faster PCs, servers, voice- and data-networking products such as routers and switches, signal-processing devices, and other technologies need faster, more functional I/O. This is particularly the case for machines handling data arriving via today's fast technologies such as multi-Gigabit Ethernet, Serial Attached SCSI (small computer system interface), Internet SCSI (iSCSI), and Fibre Channel.

New interconnect technologies are especially important for computers processing today's streaming media, games, security processing (including virus checking), video editing, voice recognition, advanced encryption, 3D animation and rendering, and other applications that require high bandwidth and dependable, smoothly flowing data transmissions.

Previous standard local I/O technologies, including PCI (peripheral component interconnect), cannot provide the necessary performance to offer the instant data availability that many systems need. Users have tried working with multiple PCI buses, but this adds cost, power consumption, and space usage. Therefore, vendors have upgraded the PCI standard and developed several new approaches for internal expansion buses and for chip-to-chip, chip-to-I/O, chipset-component, and chipset-to-I/O connectivity.

NEW APPROACHES

Like PCI, PCI-X (Extended) and PCI Express provide interconnections within the box via slots or internal expansion buses. RapidIO and HyperTransport are mezzanine connectors from a system's main processor or memory controller to another I/O, often a PCI bus within the system, perhaps for an additional chip or component. They also can connect outside the box, such as within a data center. Meanwhile, InfiniBand, which industry observers at one time thought would replace PCI, is now seen as best for external interconnects between systems.

The new interconnect technologies offer more bandwidth and less latency. All are point-to-point, packet-based approaches, and all reduce power consumption (particularly important for handheld devices) via lower pin counts. Lower pin counts also help maintain or even reduce chip size, said John Beaton, an interconnect program director for Intel.

PCI Express and HyperTransport have serial architectures; PCI-X has a parallel architecture; RapidIO has both serial and parallel architectures. Thus, there are new interconnects for working with either the traditional parallel technology or the increasingly popular serial approach.

The older parallel-bus technology provides high bandwidth and simplicity and is familiar to many developers. However, it operates over shorter distances, uses more power, and requires more I/O pins. Serial I/O approaches operate over longer distances and use fewer wires and pins, as well as less power. However, they create more latency and require high-performance silicon and additional serializer/deserializer hardware.

PCI-X

PCI, a shared-bus technology, has been an I/O standard since 1992. Since then, said Karl Walker, chief technology officer of Hewlett-Packard's Industry Standard Server Division, "PCI and its subvariants have been the dominant standard in the industry."

PCI-X, currently on version 2.0, is the dominant I/O peripheral standard in servers. It was approved in September 1999 by the PCI-SIG, a special interest group that administers PCI standards. Compaq Computer, HP (which has since purchased Compaq), and IBM jointly developed PCI-X to increase I/O performance for high-bandwidth technologies such as Gigabit Ethernet. They designed the technology largely for server applications, unlike prior PCI versions, which were developed for general I/O uses.

Vendors implement PCI-X in various ways, including in chipsets and network cards. Compared to PCI, PCI-X offers better memory read/write performance, faster clocking, and a wider bus, which can move more bits at a time.

PCI-X differs from PCI in that it uses register-to-register signaling, which more accurately resolves signals. This increases the speed at which the I/O reads data and lets a bus have higher clock and data rates. In newer versions of PCI-X, as well as in PCI Express, the clock is embedded with the signal. This makes the process more efficient and synchronization more accurate than clocking that comes from an outside source.

Currently, PCI lets one 64-bit bus run at 66 MHz and then lets either additional 32-bit buses run at 66 MHz or additional 64-bit buses run at 33 MHz, yielding a maximum data rate of 532 Mbytes per second. With PCI-X 133, one 64-bit bus runs at 133 MHz and the rest run at 66 MHz, allowing for a maximum data rate of 1.06 Gbytes per second. For PCI-X 2.0, the PCI-SIG is developing PCI-X 266 and PCI-X 533, yielding maximum data rates of 2.1 and 4.3 Gbytes per second, respectively.

PCI-X also offers advantages other than speed over earlier technologies. For example, the technology is more fault tolerant than PCI: PCI-X can reinitialize a faulty line card or take it offline before it fails.

PCI-X is backward compatible with older PCI versions, although it functions only at the older technologies' slower rates, said Alan Goodrum, an HP staff fellow. Backward compatibility is one of PCI-X's strong points, he said. "It's investment protection."

PCI EXPRESS

The PCI-SIG approved PCI Express in July 2002. The organization and key supporters such as Intel designed PCI Express, once known as Third Generation I/O (3GIO), for use with serial technologies in various markets, including computing and communications servers as well as handheld and embedded devices.

Intel's Beaton said PCI Express is taking PCI-X "and moving it from the bus to a high-speed switch interconnect." Proponents say the switched approach offers better performance than PCI's traditional shared-bus approach. The PCI-SIG designed PCI Express as a serial, switched PCI technology. Its layered architecture enables connections with copper, optical, or other media.

Unlike PCI, which uses transistor-transistor logic (TTL) signaling, PCI Express uses low-voltage differential signaling. LVDS communicates using the voltage difference between two wires, rather than the voltage in a single wire, as is the case with TTL. LVDS incurs less electrical interference, so the system can more accurately identify signals.

PCI Express currently moves data at up to 250 Mbytes per second in each direction per lane (a lane is a pair of two wires, one for transmitting and one for receiving signals), yielding 16 Gbytes per second in a typical 32-lane configuration. Users can add up to 32 more lanes to provide more bandwidth. PCI Express buses initially will run at 2.5 GHz. Later implementations could reach 6.25 GHz, with the potential to scale beyond 10 GHz in conjunction with fiber-based technologies.

PCI-SIG chair Tony Pierce, technical evangelist for Microsoft's Hardware Strategy Group, said PCI Express also offers aggressive power management and hot-plug capabilities, which enhance system availability by allowing the insertion of PCI adapter cards without rebooting.

In addition, Beaton noted, PCI Express has a quality-of-service option that lets users assign different transmissions to various virtual channels, giving priority to some of the channels. Intel plans to release the first PCI Express chips and chipsets later this year.
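To make these figures concrete, here is a minimal Python sketch (my own illustration with invented helper names; nothing below comes from the article) that reproduces the peak-rate arithmetic for the shared buses and PCI Express lanes quoted above:

```python
def shared_bus_rate_mbytes(width_bits: int, clock_mhz: float, transfers_per_cycle: int = 1) -> float:
    """Peak rate of a parallel shared bus: bytes per transfer * clock * transfers per cycle."""
    return width_bits / 8 * clock_mhz * transfers_per_cycle  # Mbytes/s

def pcie_rate_mbytes(lanes: int, per_lane_mbytes: float = 250.0, directions: int = 2) -> float:
    """Aggregate PCI Express rate: 250 Mbytes/s per lane per direction."""
    return lanes * per_lane_mbytes * directions

print(shared_bus_rate_mbytes(64, 66))      # PCI, 64-bit @ 66 MHz: 528 (the quoted 532 uses the exact 66.6-MHz clock)
print(shared_bus_rate_mbytes(64, 133))     # PCI-X 133: ~1.06 Gbytes/s
print(shared_bus_rate_mbytes(64, 133, 2))  # PCI-X 266, double data rate: ~2.1 Gbytes/s
print(shared_bus_rate_mbytes(64, 133, 4))  # PCI-X 533, quad data rate: ~4.3 Gbytes/s
print(pcie_rate_mbytes(32))                # 32 lanes, both directions: 16,000 Mbytes/s = 16 Gbytes/s
```

The small discrepancies against the quoted figures come from rounding the exact 66.66- and 133.33-MHz clocks.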
HYPERTRANSPORT

The HyperTransport Technology Consortium manages the HyperTransport standard, initially developed by AMD and the now-defunct API Networks, both semiconductor companies.

HyperTransport, shown in Figure 1, sends packet-based data and command information over fast unidirectional links. Proponents say this has two main advantages over shared-bus technologies such as PCI and PCI-X: Unidirectional links permit better signal integrity at high speeds, and they enable faster data transfers with low-power signals.

HyperTransport offers clock speeds of up to 800 MHz, double-data-rate signaling, and 32-bit data-transfer technology. The technology is thus quite fast, with a theoretical maximum aggregate data rate of 12.8 Gbytes per second. Because it is optimized for high-speed data transfer, HyperTransport is particularly good at connecting high-speed components such as processors and closely coupled chipset elements.

Also, HyperTransport links of different widths can connect. For example, a 2-bit-wide link can connect to an 8-bit-wide link, which lets users daisy-chain parts of a system or application. This allows companies to mix and match communications and embedded products for use in applications. In addition, HyperTransport provides scalable bandwidth where needed.

HyperTransport 1.10, slated for release in the near future, will offer a series of features for networking applications, said Brian Holden, chair of the HyperTransport Technology Consortium's Technical Working Group. These include the capacity to handle packets natively and network extensions that let HyperTransport bridge to other I/O types.
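The 12.8-Gbytes-per-second aggregate rate quoted above follows directly from the link parameters (32 bits wide, 800 MHz, double data rate). A quick check of the arithmetic, my own illustration rather than the article's:

```python
# 32-bit link, 800-MHz clock, double-data-rate signaling (2 transfers per cycle),
# counted over the two unidirectional links of a bidirectional connection.
width_bytes, clock_mhz, transfers_per_cycle, directions = 32 // 8, 800, 2, 2
aggregate_mbytes = width_bytes * clock_mhz * transfers_per_cycle * directions
print(aggregate_mbytes)  # 12800 Mbytes/s, i.e., 12.8 Gbytes/s aggregate
```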
RAPID IO

Motorola and Mercury Computer Systems jointly developed RapidIO between 1998 and 2000 for embedded systems, primarily for the networking and communications markets. The technology is typically implemented in processors, controllers, switches, bridges, field-programmable gate arrays, and application-specific integrated circuits.

Several companies formed the RapidIO Trade Association in early 2001 to handle the standard's development, maintenance, and evolution, according to the association's marketing chair, Kalpesh Gala, IBM's PowerPC strategic marketing manager. Explained association president Sam Fuller, "RapidIO is a chip-to-chip and board-to-board interconnect."

RapidIO offers a switched architecture that increases data rates and reduces latency. The technology also uses LVDS.

With RapidIO, all processing is done in a CPU or some other type of hardware. This eliminates the need to write I/O software; run the software on a CPU, which slows down the processor; or spend extra money on a dedicated processor to run it. This reduces latency. And because RapidIO runs only in hardware, the technology, unlike the other new local I/Os, operates with no impact on the OS, is transparent to applications, and thus doesn't need special device drivers. This makes the system simpler and more efficient.

RapidIO can bundle multiple differential links into a single link for use in a task. This lets the technology support bandwidths of up to 60 Gbits per second for each direction of a bundled bidirectional link. "Currently available systems based on RapidIO already offer backplanes with aggregate bandwidth of over 480 Gbits per second," Fuller said.

A parallel version of RapidIO provides the speed and low latency necessary for high-performance chip and system connectivity. The parallel approach is best for interconnectivity over short distances, such as between modules and carrier boards. There is also a serial version for such purposes as serial backplane communications and connectivity in digital signal-processor farms, explained Dan Bouvier, architecture manager for the Motorola Somerset Design Center. Serial RapidIO is designed for applications that require longer transmission distances, he said.

[Figure 1. (a) Traditional PC architecture has multiple layers of older PCI bus technology that can create data-transfer bottlenecks, particularly as processor performance, memory, and other technologies have improved and as computers run more-demanding applications. (b) HyperTransport (HT) targets bottlenecks by streamlining the interconnect structure and providing high-speed links. The technology also works with legacy PCI buses. Source: HyperTransport Consortium.]

INFINIBAND

InfiniBand, initially called System I/O, was born when two projects, Future I/O and Next Generation I/O, merged in 1999. Industry observers initially predicted InfiniBand would replace PCI, particularly when PCI was seen as a major system bottleneck with few alternatives, said Ramon Acosta, chief technical officer for InfiniSwitch, a networking hardware and software vendor.

However, said HP's Goodrum, InfiniBand was optimized for use in networks and is unnecessarily complex for a local I/O technology. Proponents say it is better used as an external system interconnect, particularly in data centers or between external networking devices such as those used for storage.

Each of the local I/O technologies may survive by cultivating a market niche. "I see them all being in the marketplace," said analyst Jonathan Eunice with Illuminata, a market research firm. "As long as they have a different value proposition, they will survive." For example, he said, HyperTransport's niche could be as the mezzanine bus for AMD systems.

PCI-X, on the other hand, is popular in servers but hasn't been widely adopted for client PCs, which don't need the higher bandwidth, according to PCI-SIG Chair Pierce. PCI Express could be used in many scenarios but may not be supported in servers, at least initially, because of PCI-X's popularity. PCI Express might replace accelerated graphics port technology in graphics chips.

Meanwhile, proponents are improving the new local I/O technologies. In the process, users are trying to figure out whether they can afford the new technologies right away, said Bert McComas, founder and principal analyst for InQuest Market Research. "A lot of people are going to have to work hard to make this cost-effective."

Linda Dailey Paulson is a freelance writer based in Ventura, California. Contact her at ldpaulson@yahoo.com.
Unsnarling the Interconnect Tangle

Harry Goldstein (IEEE Spectrum, January 2003)

A variety of interconnects at the chip level, board level, and network level are being introduced at a bewildering pace. We sort things out for you.

From chip-to-chip communications to data transfers between PCs and peripherals and among servers and storage devices, new interconnects are taking advantage of the packet-switching technology that makes the Internet hum to provide gigabit speeds inside and outside the computer box.

Inside the box, where signals from several devices have often shared the same parallel bus, chip-to-chip and board-to-board interconnects are using Internet-like connections that are packet-based and point-to-point. The goal is to boost I/O performance so it's on a par with the improvements seen in CPUs and memory. Examples are Intel's PCI Express, AMD's HyperTransport, and Motorola's RapidIO.

The new Intel PCI Express (formerly 3GIO) bus, which will crop up in Dell PCs starting this year to connect CPUs to peripherals, works serially: It sends data packets one at a time from one point directly to another. Timing information embedded in the packet is decoded and used to reassemble information at the end point. Serial interconnects are less vulnerable than parallel links to crosstalk and capacitance problems and are also cheaper to implement.

The venerable PCI (for peripheral component interconnect) and its successor PCI-X are parallel interconnects, which means that one clock signal handles multiple data lines. That does less for speed than one might think, because signals flowing across the bus at gigahertz rates are vulnerable to delay skew, so that parallel bits arrive at different times and must be synchronized. To get around this, both HyperTransport and RapidIO, two next-generation parallel interconnects, use a source-synchronous clock, meaning that separate clock signals are sent on wires that run alongside data wires, allowing higher clock speeds.

Packet-switched interconnects are also infiltrating network storage, where the low-latency, high-bandwidth Fibre Channel is the interconnect of choice. The Fibre Channel serial data-transfer architecture connects servers to storage devices via an array (or fabric) of optical fibers and switches that can carry data up to 10 km. In contrast, networks using packet switching over transmission control protocol/Internet protocol (TCP/IP) can exploit the ability of Ethernet networks and the Internet to transfer data over arbitrarily long distances. Moreover, the high-speed, continuous data that SCSI devices demand is being facilitated by recent improvements in IP networking and also by better buffer memories: the first have made it faster, and the second are smoothing packet transmission.
In fact, the IP-based iSCSI (SCSI over IP) standard, which should receive official approval as a standard from the Internet Engineering Task Force in February 2003, creates the option of building a cheap Ethernet-based storage area network without Fibre Channel, and Ethernet is the most widely deployed networking technology around.

Initially, when iSCSI hits the market full force this year, it will be deployed in remote mirroring (where data is written to a local disk and a remote disk simultaneously) and remote backup applications. But it is expected to spread quickly, particularly among small and medium-sized companies that have so far shied away from investing in a full-blown Fibre Channel SAN. And they'll have plenty of equipment options to choose from, with the likes of Adaptec, Cisco Systems, Emulex, IBM, McDATA, and QLogic rolling out switches, routers, and adapters throughout the year.

In a typical iSCSI transaction, the host server requests data from storage using the SCSI command protocol. That command protocol is intercepted by a driver, encoded into a packet, and transmitted across the Internet and/or Ethernet to a local or distant location. There it is acted upon as if it were a local resource, even though that "local" resource is somewhere across the Internet.
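As a rough illustration of the transaction just described, here is a minimal Python sketch of the driver-side encapsulation step. Everything here is invented for illustration (the class, field names, and header layout do not follow the real iSCSI PDU format); only the well-known iSCSI TCP port 3260 and the SCSI READ(10) opcode are real values.

```python
from dataclasses import dataclass

@dataclass
class ScsiCommand:
    opcode: int   # SCSI operation, e.g., 0x28 = READ(10)
    lba: int      # logical block address to read from
    blocks: int   # number of blocks requested

def encapsulate(cmd: ScsiCommand) -> bytes:
    """Driver-side step: intercept a SCSI command and encode it into a packet
    that can travel over TCP/IP. Real iSCSI defines a 48-byte basic header
    segment; this placeholder header is just a 4-byte length field."""
    payload = f"{cmd.opcode}:{cmd.lba}:{cmd.blocks}".encode()
    return len(payload).to_bytes(4, "big") + payload

packet = encapsulate(ScsiCommand(opcode=0x28, lba=2048, blocks=16))
# The packet would then be sent over TCP (port 3260) to the target, which decodes
# it and executes the command as if the request had come from a local disk.
```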
In response to the threat posed by iSCSI, Fibre Channel acolytes have developed some IP-based systems themselves. FCIP (Fibre Channel over Internet protocol) connects geographically scattered Fibre Channel SANs by tunneling through IP networks already in place. iFCP (Internet Fibre Channel protocol) is a TCP/IP-based protocol for interconnecting Fibre Channel storage devices or SANs using an IP infrastructure, in conjunction with or in place of Fibre Channel switching and routing elements.

According to Bert McComas, founder and principal analyst of InQuest Market Research (Higley, Ariz.), iSCSI is a more practical application than rival technologies such as iFCP and FCIP, which try to put the IP back in Fibre Channel. "The whole purpose of Fibre Channel was to get rid of the IP stack of layers," says McComas, who adds that Intel's InfiniBand switch fabric is aimed at these same storage network markets, as well as at data centers. While InfiniBand has picked up some support from the likes of Sun Microsystems and others, industry insiders say that it's not yet ready for prime time, especially considering the new infrastructure investments companies would have to make to incorporate the technology. "InfiniBand is a dream," McComas quips, "but it's not a dream come true."

Interconnect intricacies: Dozens of kinds of interconnects hook up chips, boards, and servers and storage devices separated by meters or kilometers. Here are a buzzworthy selected few.

Interconnect | Devices | Application | Bandwidth | Type
PCI | Chip-chip, expansion bus | Internal bus: PCs and peripherals | 1.1 Gbytes/s | Parallel, shared bus, globally clocked
HyperTransport | Chip-chip | PCs and embedded systems | 400 Mbytes/s to 16 Gbytes/s | Packet-switched, point-to-point, source-synchronous clock, parallel
RapidIO | Chip-chip | Telecom networking, cell-phone base stations | 400 Mbytes/s to 8 Gbytes/s | Packet-switched, point-to-point, parallel (serial planned)
PCI-X 2.0 | Chip-chip | Internal bus: PCs, servers, workstations, and peripherals | 2 to 4 Gbytes/s | Parallel, shared bus, globally clocked
PCI Express (3GIO) | Expansion bus, chip-chip | PCs and servers | 8 Gbytes/s | Packet-switched, point-to-point, serial
InfiniBand | External backplane, servers and storage | Data centers and storage networks: server-server, server-storage, server-blade, blade-blade | 2.5/10/30 Gbits/s | Packet-switched, point-to-point, serial
Fibre Channel | External backplane, server-storage | Storage networks | 2 Gbits/s, scaling to 10 Gbits/s | Serial, embedded clock, point-to-point
Fibre Channel over IP (FCIP) | External backplane, server-storage | Storage networks | 1 Gbit/s | Serial, packet-switched, embedded clock, point-to-point
iSCSI | External backplane, server-storage | Storage networks | 1 Gbit/s, scaling to 10 Gbits/s | Serial, packet-switched, embedded clock, point-to-point
Gigabit Ethernet | External backplane, server-storage | Data centers and storage networks | 1-10 Gbits/s | Serial, packet-switched, embedded clock, point-to-point

Switch Fabric Interfaces

Itamar Elhanany, University of Tennessee at Knoxville; Kurt Busch, TeraCross; Derek Chiou, Avici Systems (Computer, September 2003, Standards)

Standardizing the interfaces connecting line cards with switch fabrics will facilitate innovation in communication systems.

Switch fabrics are fundamental building blocks in a wide range of communication platforms. However, despite the growing need for next-generation switches and routers, semiconductor vendors have been slow to develop switch fabric chipsets. In addition to the many technical challenges associated with the deployment of such fabrics, industry analysts agree that a key factor impeding wide-scale exploitation is a lack of standardization in interconnecting fabric components.

Many of the more than 30 companies that develop switch fabrics offer excellent price and performance, but none guarantee compatibility with other vendor offerings or even with future generations of their own products. The market consists of several point solutions with no reliable and coherent roadmap.

In these uncertain times, assurance of supply is a major issue in selecting a silicon vendor. Standard interfaces will make it possible to replace a discontinued device without requiring a new system design. The guaranteed availability of backup products will reduce the risks associated with each device and let systems designers select newer and more cutting-edge offerings.

SWITCH FABRICS

A switch fabric moves incoming data from a set of ingress ports to a single egress port (in the case of unicast devices) or multiple egress ports (in the case of multicast devices). In applications such as video switching, the binding between an ingress and egress port changes infrequently. In IP routers and asynchronous transfer mode switches, however, such fabrics dynamically partition data into fixed-sized or variable-sized cells, frames, packets, and other units. Dynamic fabrics tend to be more complex than static fabrics because they require arbitration between data units that may be simultaneously destined for the same output port.
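To illustrate the arbitration problem just mentioned, here is a minimal round-robin arbiter sketch (my own illustration; the article does not prescribe any particular arbitration scheme): each cycle, at most one of the contending ingress ports wins the output port, and the starting point rotates so no requester is starved.

```python
def round_robin_grant(requests: list[bool], last_winner: int) -> int | None:
    """Grant one requesting ingress port, scanning from the port after the
    previous winner so that grants rotate fairly across contenders."""
    n = len(requests)
    for offset in range(1, n + 1):
        port = (last_winner + offset) % n
        if requests[port]:
            return port
    return None  # no ingress port has a cell for this output right now

# Ports 0, 2, and 3 all hold a cell destined for the same egress port.
print(round_robin_grant([True, False, True, True], last_winner=0))  # 2; ports 3 and 0 wait their turn
```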
Much switch fabric research in recent years has focused on Internet packet switching. Due to advances in emerging networking applications, however, the need for high-performance switch fabric solutions has shifted beyond the pure IP domain. Robust metropolitan area networks, director-class storage area networks, and other emerging switching applications have reached both the capacities and the service requirements to justify the need for advanced off-the-shelf fabric products.

Figure 1 presents a generic overview of a contemporary high-capacity fabric architecture. The ingress path connects a line card's network processing subsystem, consisting of network processors and/or traffic managers, to the switching core, which dynamically connects ingress ports to egress ports. The egress path aggregates traffic from the switching core and forwards it to the line card front end.

[Figure 1. Common switch fabric architecture: line cards (media access control, network processor subsystem, fabric interface chip) connect over CSIX and proprietary interfaces to the switch fabric cards, scheduler, and switching core. The architecture allows multiple input ports to transmit data packets to multiple output ports simultaneously.]

Switch fabrics can be implemented in various ways, from shared-memory architectures to fully distributed multistage designs. Regardless of the architecture, however, most are input buffered: They queue incoming cells or packets at the ingress stage until a scheduling mechanism signals them to traverse the switch core. In many implementations, the buffering occurs at the line card, and the switch cards contain little, if any, memory. In addition to payload data, control information also flows from the line cards to the switch fabric.

COMMON SWITCH INTERFACE

Many switch fabrics today support the common switch interface (CSIX), a standard developed by the Network Processing Forum (www.npforum.org), formerly the CSIX Consortium. The CFrame datagrams that traverse CSIX devices are fixed in size. Each CFrame consists of a header, which contains all relevant management information, including routing and priority status, and a payload. In the segmentation and reassembly (SAR) process, the switch segments packets arriving from various sources into CFrames and later reassembles them as they depart.
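A minimal sketch of the segmentation-and-reassembly step just described (the field names and the fixed payload size here are illustrative only and are not taken from the CSIX specification):

```python
from dataclasses import dataclass

CFRAME_PAYLOAD_BYTES = 64  # illustrative fixed size; not the CSIX-defined value

@dataclass
class CFrame:
    dest_port: int   # routing information carried in the header
    priority: int    # priority status carried in the header
    seq: int         # fragment position, used for reassembly
    payload: bytes   # fixed-size slice of the original packet

def segment(packet: bytes, dest_port: int, priority: int) -> list[CFrame]:
    """Ingress side: split a variable-sized packet into fixed-size CFrames, padding the last."""
    return [CFrame(dest_port, priority, seq,
                   packet[i:i + CFRAME_PAYLOAD_BYTES].ljust(CFRAME_PAYLOAD_BYTES, b"\x00"))
            for seq, i in enumerate(range(0, len(packet), CFRAME_PAYLOAD_BYTES))]

def reassemble(frames: list[CFrame], original_len: int) -> bytes:
    """Egress side: order fragments by sequence number and trim the padding."""
    return b"".join(f.payload for f in sorted(frames, key=lambda f: f.seq))[:original_len]

pkt = bytes(range(200))
assert reassemble(segment(pkt, dest_port=3, priority=1), len(pkt)) == pkt
```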
A fabric interface chip (FIC) resides on each line card and interfaces to either a network processor or a traffic manager; the fabric boundaries thus logically encompass part of the line cards.

While the CSIX standard is invaluable in guaranteeing interoperability between different networking devices, namely switch fabrics and traffic managers or network processors, it intentionally does not specify the interface between the FIC and the fabric cards. Consequently, a network equipment manufacturer that designs around a switch fabric chipset is confined to the developer's proprietary interface. Changing interfaces would require redesigning the high-speed boards that comprise the line cards and fabric, an intricate task at best.

RECENT EFFORTS

Researchers are exploring various ways to standardize the FIC-to-fabric interface. The three primary technologies currently driving the effort are PCI Express Advanced Switching (PCI ExAS), RapidIO, and Ethernet.

PCI ExAS

The PCI Express physical layer (www.us.design-reuse.com/articles/article5306.html) is developed around a building block that includes two point-to-point unidirectional paths, each of which consists of a low-voltage differential signaling (LVDS) pair operating at 2.5 GHz. This yields an effective bandwidth of 2 Gbps in each direction, or a full-duplex bandwidth of 4 Gbps. Lanes can be combined to provide even higher bandwidth as necessary.

The PCI ExAS architecture is at its root a serialized, packetized version of PCI. It incorporates features such as inherent multicast capabilities and error detection and correction at the protocol level, eliminating the need to implement them elsewhere.

PCI ExAS's primary goal is to encapsulate all the information traversing between the different line cards and the fabric core over a standardized interface that is defined on both the physical and logical layers. Such unification will allow different vendors to add their variations while still adhering to an industry standard. In addition, PCI ExAS's ability to handle packet traffic eliminates the need for an external SAR device, thereby greatly reducing system cost.
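The gap between the 2.5-GHz signaling rate and the 2-Gbps effective rate reflects 8b/10b line coding, in which every 8 data bits travel as a 10-bit symbol; the serial RapidIO figures in the next section follow the same rule. A one-line check of the arithmetic (my own illustration):

```python
def effective_gbps(line_rate_gbaud: float) -> float:
    """Usable data rate of an 8b/10b-coded serial link: 8 data bits per 10 line bits."""
    return line_rate_gbaud * 8 / 10

print(effective_gbps(2.5))    # PCI Express lane: 2.5 GHz  -> 2.0 Gbps per direction
print(effective_gbps(3.125))  # serial RapidIO: 3.125 GHz -> 2.5 Gbps per line
```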
Rapid IO

The RapidIO initiative (www.rapidio.org) is attempting to address the same problem in similar ways. Like PCI ExAS, RapidIO is a layered architecture; it also uses LVDS signaling, with up to 1 GHz on both edges of the clock. RapidIO includes specifications for both parallel and serial signaling and provides error recovery and reporting.

The serial interface particularly appeals to switch designers; it uses 3.125-GHz signals with 8b/10b encoding for an effective bandwidth of 2.5 Gbps per line. Multiple lines can be used in parallel for greater bandwidth per connection. Unlike PCI ExAS, RapidIO is a more traditional cell-based architecture that requires the use of an external SAR device.

Ethernet

Some vendors are promoting standard 1-Gbps and 10-Gbps Ethernet as an interconnection standard. The clear advantage it offers is the availability of a wide variety of inexpensive silicon building-block devices. However, because Ethernet was not inherently designed for this function, it will likely be most useful in areas where cost, rather than quality of service, latency, or any other performance-centric parameter, is the dominant decision-making factor.

A version of the CSIX streaming interface over high-speed serial links, preferably 2.5 Gbps and above, offers an alternative path toward efficient convergence of fabric-related interconnect technologies. Such an interface would allow using a CSIX-based interface standard to connect FIC devices to fabric cards, as well as to connect traffic managers and network processors to the FIC. In fact, such an interface would let the FIC reside on the fabric card or even be integrated into the switch fabric.

Itamar Elhanany is an assistant professor in the Department of Electrical and Computer Engineering at the University of Tennessee at Knoxville. Contact him at itamar@ieee.org. Kurt Busch is vice president of marketing at TeraCross Inc., based in Campbell, Calif. Contact him at kurt.busch@teracross.com. Derek Chiou is a principal engineer at Avici Systems Inc., based in North Billerica, Mass. Contact him at dchiou@avici.com.

Interconnection Technologies: An Integrative View Based on the Works "The Ins and Outs of New Local I/O Trends," "Switch Fabric Interfaces," and "Unsnarling the Interconnect Tangle"

[Insert your full name in this field, without parentheses]1

This summary aims to synthesize the three works highlighted above, written in the foreign language English. To that end, each was read and its main points surveyed in the context of interconnection technologies. We can define interconnection technologies as the connection between two or more networks, which requires the use of devices such as routers or hubs.

The first work, "The Ins and Outs of New Local I/O Trends," highlights that the new interconnect technologies are quite important for computers processing streaming media, games, security processing (even virus checking), video editing, voice recognition, and advanced encryption, among other applications. Curiously, though, the previous standard local technologies, even PCI (peripheral component interconnect), cannot meet the performance needed to offer the instant data availability that many systems require. From this arose the need for a substitute for PCI; thus came InfiniBand, a remote direct memory access technology.

The second work, "Switch Fabric Interfaces," addresses essential building blocks in a wide variety of communication platforms: despite the growing need for high-performance, next-generation switches and routers, semiconductor vendors have been slow to develop switch fabric chipsets. It calls attention to the main factor discouraging wider exploitation of this technology, namely insufficient standardization in interconnecting the fabric's components, with companies guaranteeing no compatibility with other vendors' offerings or even with future generations of their own products.

Finally, the last work, "Unsnarling the Interconnect Tangle," addresses the diversity of interconnects at the chip, board, and network levels, introduced at a bewildering pace. It calls attention to the goal of boosting I/O performance so that it is on a par with the improvements seen in CPUs and memory.

1 [Insert your degree and affiliation here, e.g., Biologist, USP]