News update as of February 2008:
We regret to announce that the chief architect for SCI, and a major contributor to all SCI-related and to many other standards, David V. James, passed away in January 2008.
News update as of February 2003:
The SCIzzL web server has been moved to a new machine, a Macintosh G4 running Apache, at a different location. It is reached via a dynamic IP address through dyndns.org. Time will tell whether this will be satisfactory--it is far less expensive, at least!
News update as of September 2001:
At long last the IEC/IEEE publications deadlock has been resolved, and the corrected SCI spec has been published by the IEC as "ISO/IEC Standard 13961 IEEE". Unfortunately, the contents of the updated diskette were not incorporated. However, the updated C code is online, as the sciCode.c text file (1114K). This release does not have the separate index file that shipped on the original diskette, because with the passage of time we lost access to the particular software that generated that index. (People change employers.)
Unfortunately, the IEEE completely bungled the update, reprinting the old uncorrected standard with a new cover and a few cosmetic changes. Until this has been corrected, the IEEE spec should be avoided.
We now return you to the somewhat stale content of the remainder of this site:
My apologies to all who were offended by the spam that came through the SCIzzL reflectors May 4, 2000. The reflectors were set up to require the approval of a moderator (me), but suddenly they spontaneously started accepting everything without waiting for approval. We have not yet understood how this happened (the control table is still set up correctly!), so we have just shut down the system entirely. Probably we'll just leave the system turned off, since it takes a lot of time to keep a reflector system running, maintaining current address lists etc.
If it's any consolation, nobody was hit worse than I was--a few hours into the failure I had already received over 1400 emails, mostly from machines, but with a few important messages from humans mixed in. Past experience tells me that some such messages will continue straggling in for days.
We also had the reflectors shut down for several months around the turn of the century, due to server software changes, so we missed some important announcements some of you sent. In particular, I now see there was no notice about the new SCI book: SCI: Scalable Coherent Interface, Architecture and Software for High-Performance Compute Clusters, edited by Hellwagner and Reinefeld. I wrote the last chapter, a version of which you can see here: The Development and Adoption of SCI, and Future Directions.
SCIzzL is on hiatus. We lost all our corporate sponsors when SLDRAM became AMI2 and I decided to go back to SCI-related work instead. Even though most high-end computers now include SCI (or, in a couple of cases, "SCI-like") interconnects, their manufacturers are not planning to use it to connect to other manufacturers' products, and so they don't see any reason to promote SCI as the open vendor-independent standard it was designed to be. So, they don't see any need to support further standards development or a trade association, or SCIzzL.
And meanwhile the IEEE has itself adopted essentially the SLDRAM/SCIzzL model of standards development in its new ISTO program, so SCIzzL now finds itself competing with the IEEE while trying to support the development of IEEE standards, a difficult contradiction.
So for now I'm trying to keep SCIzzL alive (at a low level) while looking for new opportunities. I continue to work on Serial Plus development, hoping that we can find enough support and the right market opportunity to make it succeed as a true interchange standard. It has all the virtues of SCI and several new ones, which will make it attractive for multimedia and realtime use. Financially there's no problem--there's more consulting work than I can handle, but there isn't much time available for maintaining SCIzzL infrastructure. I've also reduced my IEEE standards involvement significantly (the new Microprocessor Standards Committee Chairman is Don Wright, of Lexmark). I still hope to see the corrected SCI spec published by the IEC and reaffirmed by the IEEE, but I finished the edits two years ago and it still seems to be stuck in some kind of deadlock between those two organizations.
There do seem to be a lot of people who think SCI is dead, yet it's outlasted all its competitors and is nearly ubiquitous in high end servers--just under other names. Perception isn't always reality.
The remainder of this site is stale, but I can't take time right now to bring it up to date safely (without breaking historical links etc.). Sorry, use with care.
For SCI Europe '99 presentation, click here.
For SerialPlus info, click here.
The MicroProcessor Standards Committee info is now maintained at the IEEE web site.
SCI has been an open, public, officially approved ANSI/IEEE Standard interconnect since 1992, and still there is no other open standard that approaches SCI's performance: bandwidth, latency, simplicity, freedom from arbitrary limits, cost-effectiveness.
Sun's server machines now all support high performance SCI interconnection. Sun seems to have found a way to avoid the PCI deadlocks that have limited SCI interface performance in many other systems. (The PCI architecture was not designed for high performance bridging, and its problems show up quickly when a connection to a much faster interface, such as SCI, pushes PCI to its limits. The fundamental PCI flaws can be avoided by careful bridge design if you allow only one device per PCI bridge. I think this must be the approach Sun uses, with one PCI bridge dedicated to a PCI/SCI interface board.)
DG has announced commercial availability of its SCI-based machines! See their NUMALiiNE web pages. For details about their SCI-based NUMALiiNE interconnect, see their SCI Chipset paper.
Khan Kibria of ISS gave an electrifying presentation at SCIzzL-7 and an update at SCIzzL-8 in December that included showing real silicon (now tested and working perfectly) and revealing more about product plans and schedule. Very exciting! ISS is committed to building a market based on interface standards, exactly what the SCI family of standards was intended for. ISS has just launched a new Web page; take a look!
Sequent has announced commercial availability of its SCI-based machines! See their press release. For more details about how SCI is used for their IQ-Link interconnect, see their white papers.
For MicroProcessor Standards Committee info, click here.
For P1285 info, click here.
For Serial Express P2100 (formerly P1394.2) info, click here.
For KiloProcessor Extensions P1596.2 latest draft, click here.
For Parallel Links for SCI P1596.8 info, click here.
For Physical Layer API for SCI P1596.9 info, click here.
For Extended rugged PCI packetbus P1996 info, click here.
For Control & Status Register Architecture revision P1212 info, click here.
May, 2000: apologies for spam problem, some general cleanup/updates.
August, 1999: August SCIzzL updates, SCIzzL-10-11 proceedings online.
July 30, 1999: August SCIzzL meetings, SCIzzL-12 workshop.
March 8, 1999: FutureIO, MSC meeting, March SCIzzL meetings, SCIzzL-11 workshop.
October 8, 1998: MSC meeting, November SCIzzL meetings, SCIzzL-11 workshop.
September 1, 1998: SCIzzL-10 workshop, SGI argues the case for SCI.
March 11, 1998: SCIzzL-9 workshop, SLDRAM has spun off, misc corrections.
October 14, 1997: Dolphin joins SCIzzL, Rescheduling SCIzzL-8 workshop, DG uses SCI in new machines
April 24, 1997: pointer to May meetings announcement; October: added pointers to November meeting schedule and info, and to Wescon presentations
Oct 18, 1996: added pointer to HPCwire Interview of Gustavson on SCI
The Scalable Coherent Interface (Local Area MultiProcessor) effectively combines the roles of a computer backplane bus, processor memory bus, I/O bus, high performance switch, packet switch, ring, mesh, local area network, optical network, parallel bus, serial bus, and information-sharing and information-communication system. It provides distributed directory-based cache coherency for a global shared memory model, and uses electrical or fiber-optic point-to-point unidirectional cables of various widths. Typical performance is currently in the range of 200 MByte/s per processor (CMOS) to 1000 MByte/s per processor (BiCMOS), over distances of tens of meters for electrical cables and kilometers for serial fibers. SCI/LAMP was designed to be interfaceable to common buses such as PCI, VME, Futurebus, and Fastbus, and to I/O connections such as ATM or FibreChannel. It was designed to work in complex multivendor systems that grow incrementally, a harder problem than interconnecting processors inside a single product (e.g. an MPP). Its cache coherence scheme is comprehensive and robust, independent of the interconnect type or configuration, and can be handled entirely in hardware. It provides distributed shared memory with transparent caching, which improves performance by hiding the cost of remote data access and eliminates the need for costly software cache management.
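To make the distributed-directory idea concrete, here is a minimal C sketch of the kind of bookkeeping it implies. The field names, state handling, and list operation shown are illustrative assumptions for this page, not the tag layout or protocol defined by the SCI standard; the point is simply that memory records only the head of a linked list of sharing caches, and each cache links to its neighbors, so coherence actions involve only the nodes that actually hold copies rather than requiring a bus-wide snoop.

/* Illustrative sketch only -- not the actual SCI coherence tag layout.
 * Memory (the "home" node) records the head of a sharing list; each cached
 * copy links forward and backward to its neighbors in that list, so
 * coherence traffic is directed only at nodes that hold copies. */
#include <stdio.h>
#include <stdint.h>

#define NO_NODE 0xFFFFu                 /* hypothetical "end of list" marker */

typedef struct {
    uint16_t head_node;                 /* node ID of the first sharer, or NO_NODE */
} memory_tag_t;                         /* kept at the home (memory) node */

typedef struct {
    uint16_t forward_node;              /* next sharer in the list, or NO_NODE */
    uint16_t backward_node;             /* previous sharer; NO_NODE means "memory" */
} cache_tag_t;                          /* kept with each cached copy of the line */

/* A new reader prepends itself to the sharing list: it asks the home node
 * for the current head and links in front of it.  (The actual message
 * exchanges and state transitions are omitted in this sketch.) */
static void prepend_sharer(memory_tag_t *home, cache_tag_t *mine, uint16_t my_node)
{
    mine->forward_node  = home->head_node;   /* old head becomes our successor */
    mine->backward_node = NO_NODE;           /* memory now points at us */
    home->head_node     = my_node;
}

int main(void)
{
    memory_tag_t home = { NO_NODE };
    cache_tag_t  node5_tag, node9_tag;

    prepend_sharer(&home, &node5_tag, 5);     /* node 5 reads the line first */
    prepend_sharer(&home, &node9_tag, 9);     /* node 9 reads it next */

    printf("head of sharing list: node %u\n", home.head_node);         /* prints 9 */
    printf("node 9's successor:   node %u\n", node9_tag.forward_node); /* prints 5 */
    return 0;
}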
SCI began in 1988 in the IEEE P896 Futurebus standards project, when some participants realized that within a few years not even the best bus design would be able to support the needs of multiprocessors built from fast microprocessors. In order to scale performance up to far higher levels, SCI abandoned the bus-specific parts of the Futurebus protocols that could not be scaled (such as non-split read transactions, which hold the bus from initiation until completion, and bus-snooping cache coherence), but kept the general architecture as far as possible so as to be easy to use for interconnecting Futurebus subsystems, and added some features to make it easier to interface to other common buses like the VME bus and the PCI bus.
SCI was completed in 1991, and became an approved ANSI/IEEE standard in 1992. It is currently being designed into at least half a dozen commercial computers, and is already shipping inside the Hewlett-Packard/Convex Exemplar global-shared-memory PA/RISC-based multiprocessor supercomputer (about 90 in the field as of May 1995).
SCI reduces the delay of interprocessor communication by an enormous factor compared to even the newest and best interconnect technologies that are based on the previous generation of networking and I/O channel protocols (FibreChannel and ATM), because SCI eliminates the need for run-time layers of software protocol-paradigm translation. A remote communication in SCI takes place simply as part of a load or store opcode execution in a processor. Typically the remote address results in a cache miss, which causes the cache controller to address remote memory via SCI to get the data; within about a microsecond the remote data are fetched into the cache and the processor continues execution.
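As a minimal sketch of that programming model: map_remote_segment() below is a hypothetical stand-in for whatever mechanism an SCI adapter driver actually provides for exposing remote memory in the local address space (here it is stubbed with a local allocation so the sketch runs). Once the mapping exists, remote access is an ordinary load or store; the cache and SCI hardware resolve the miss.

/* Minimal sketch: map_remote_segment() is a hypothetical stand-in for an
 * SCI adapter driver's mechanism for mapping remote memory into the local
 * address space.  It is stubbed with calloc here so the sketch compiles and
 * runs; the point is that, once mapped, remote access is just a load/store. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static volatile uint64_t *map_remote_segment(int node_id, size_t bytes)
{
    (void)node_id;               /* real code would program the SCI adapter here */
    return calloc(1, bytes);     /* stand-in for the mapped remote pages */
}

int main(void)
{
    volatile uint64_t *remote = map_remote_segment(7, 4096);   /* "node 7", 4 KB */
    if (remote == NULL) return 1;

    remote[0] = 42;              /* a store: the hardware would deliver it remotely */
    uint64_t x = remote[0];      /* a load: a cache miss would fetch the remote line */
    printf("read %llu back through the mapping\n", (unsigned long long)x);
    return 0;
}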
The old approach, moving data through I/O-channel or network-style paths, requires assembling an appropriate communication packet in software, pointing the interface hardware at it, and initiating the I/O operation, usually by calling a subroutine. When the data arrive at the destination, hardware stores them in a memory buffer and alerts the processor by an interrupt when a packet is complete or the buffers are full. Software then moves the data to a waiting user buffer (in the latest systems this move can sometimes be avoided), and finally the user application examines the packet to find the desired data. Typically this process results in latencies that are tens to thousands of times higher than SCI's. These latencies are the main limitation on the performance of Clusters or Networks of Workstations. "Active Messages" promise to reduce these overheads considerably, but will still be much slower than SCI. To make program porting easy, the old protocols can be layered on top of SCI transparently. Of course, such an implementation gains only SCI's raw speed factor: to get SCI's full potential speedup, applications will need to eliminate the protocol overheads by using global shared memory directly.
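For contrast, here is a minimal sketch of the layered path just described, using an ordinary POSIX socket pair in place of a network interface: the sender assembles a packet in software and initiates the I/O, and the receiver copies the arriving bytes out of a buffer and parses them to recover one value. The interrupt handling and kernel buffering are hidden inside the socket calls; the example only illustrates the extra packetize/copy/parse steps that the SCI load/store path avoids.

/* Sketch of the layered I/O-channel/network path: build a packet, start the
 * I/O, receive into a buffer, copy out, and parse -- each step adds latency
 * that a direct shared-memory load or store does not incur. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;

    /* Sender: assemble a packet (header + payload) in software, then start the I/O. */
    uint64_t value = 42;
    unsigned char packet[16];
    memcpy(packet, "DATA....", 8);              /* illustrative 8-byte header */
    memcpy(packet + 8, &value, sizeof value);   /* payload */
    if (write(sv[0], packet, sizeof packet) != (ssize_t)sizeof packet) return 1;

    /* Receiver: data land in a buffer, get copied to user space, then get parsed. */
    unsigned char buffer[16];
    if (read(sv[1], buffer, sizeof buffer) != (ssize_t)sizeof buffer) return 1;
    uint64_t received;
    memcpy(&received, buffer + 8, sizeof received);
    printf("extracted %llu after packetize/copy/parse\n",
           (unsigned long long)received);

    close(sv[0]);
    close(sv[1]);
    return 0;
}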
Another advantage SCI has is the simplicity of its RISC-style protocols, which allow SCI links to deliver much higher performance for any given chip technology. For example, IBM presented a single-chip FibreChannel interface at ISSCC-95, which will run at 1.06 Gbit/s (100 MByte/s after decoding), but IBM demonstrated a single-chip SCI interface (transceivers and all) in mid-94 running at 1 GByte/s, 10 times as fast. High performance ATM links today are 155 Mbit/s, one 64th the speed of SCI, but are expected soon to reach 622 Mbit/s, a sixteenth of the IBM chip's standard SCI link speed.
For applications that don't need SCI's full speed, the standard protocols also support a ring-style connection that shares the link bandwidth among some number of devices, avoiding the cost of a switch. Either an individual SCI device or an entire ring can be connected to an SCI switch port, giving a user the ability to trade off cost versus performance over a broad range. Rings can also be bridged to other rings to form meshes and higher-dimensionality fabrics.
For information about SCI Europe '99, click here.
For information about SCI Europe 2k, click here.
Invited talk at SCI Europe 1999:
PDF (216k), color, for onscreen viewing
PDF (84k), black/white, for full-size printing
PDF (64k), 4-up black/white, for handout-style printing
HPCwire Interview: Gustavson on SCI (October 1996)
Loebel's responses to HPCwire questions
What is the Scalable Coherent Interface?
Compare SCI and ATM, FibreChannel, HIPPI, Serialbus, SerialExpress, SuperHIPPI
Gustavson's comments on the SGI Hot Chips 98 I/O "tutorial" that argues strongly for using SCI
No future meetings are currently scheduled (as of May 2000)
Info about the SCIzzL-12 workshop and August '99 SCI-related Meetings at Santa Clara University is available as PDF (43k) or PS (215k).
August 24, 1999 SCI ccNUMA Tutorial (html)
August 1999 SCIzzL-12 Preliminary Program (html)
Registration form for SCIzzL-12 online or in paper form as PDF (7k), and as PostScript, (57k).
*** Schedule for Aug 21-24, 1999, at Santa Clara Univ.:

Time | Fri 20 Aug    | Sat 21 Aug | Sun 22 Aug    | Mon 23 Aug    | Tue 24 Aug
0900 | Hot           | SCIzzL-12  | P2100         | P1996         | SCI/Ser+
0930 | Interconnects | Workshop   | Serial        | HiRel         | Intro
1000 | Tutorial      |            | Plus          | PCI/Ser+      | Tutorial (DBG)
1030 | Day           | -Break-    | -Break-       | -Break-       | -Break-
1100 |               | SCIzzL-12  | P2100         | P1996         | SCI/Ser+
1130 |               | Workshop   | Serial        | HiRel         | Intro
1200 |               |            | Plus          | PCI/Ser+      | -Lunch-
1230 |               | -Lunch-    | -Lunch-       | -Lunch-       | -Lunch-
1300 |               | -Lunch-    | -Lunch-       | -Lunch-       | ccNUMA
1330 |               | SCIzzL-12  | P1596.2       | P1537         | using SCI
1400 |               | Workshop   | KiloProcessor | Electronic    | Tutorial
1430 |               |            | Extensions    | DataSheet std | (Loebel)
1500 |               | -Break-    | -Break-       | -Break-       | -Break-
1530 |               | SCIzzL-12  | P1596.2       | P1596.10      | ccNUMA
1600 |               | Workshop   | KiloProcessor | SCI chip      | Tutorial
1630 |               | Extensions | BackEnd bus   |               |
(as needed)
Info about the SCIzzL-11 workshop and March '99 SCI-related Meetings at Santa Clara University is available as PDF (44k), and as PostScript, (216k).
Info about the SCIzzL-10 workshop and Sept '98 SCI-related Meetings at Santa Clara University is available as PDF (43k), and as PostScript, (163k).
Info about the SCIzzL-9 workshop and March '98 SCI-related Meetings at Santa Clara University is available as PDF (90k).
Info about the SCIzzL-8 workshop and December '97 SCI-related Meetings at Santa Clara University is available as PDF (140k), and as PostScript, (254k).
Info about the May '97 SCI-related Meetings at Santa Clara University is available as PDF (32k), and as PostScript, (163k).
Info about the SCIzzL-7 workshop and March '97 SCI-related Meetings at Santa Clara University is available as PDF (62k), and as PostScript, (169k).
March 1997 SCIzzL-7 Program (html)
March 27, 1997 SCI C-Code Tutorial (html)
Info about the November '96 SCI-related Meetings at Santa Clara University is available as PDF (64k), and as PostScript, (147k).
Info about the SCIzzL-6 workshop and September '96 SCI-related Meetings at Santa Clara University is available as PDF (76k), and as PostScript, (197k).
September 1996 SCIzzL-6 Program (html)
September 23, 1996 SCI C-Code Tutorial (html)
(We hope eventually to make all the workshop proceedings accessible online.)
Tables of Contents in PDF:
SCIzzL-1 Table of Contents, PDF
SCIzzL-2 Table of Contents, PDF
SCIzzL-3/4 Table of Contents, PDF
SCIzzL-5 Table of Contents, PDF
SCIzzL-6 Table of Contents, PDF
SCIzzL-7 Table of Contents, PDF
SCIzzL-8-9 Table of Contents, PDF
Roy Clark, Data General: SCI Interconnect Chipset and Adapter: Building Large Scale Enterprise Servers with Pentium Pro SHV Nodes (html)
Khan Kibria, Interconnect Systems Solution: Lightweight SCI, the low cost interconnect solution (pdf) and Presentation slides (pdf)
Mitchell Loebel: presentation slides (pdf)
David Gustavson & Qiang Li, Santa Clara University: Low Latency, High Bandwidth, and Low Cost for Local-Area MultiProcessors (pdf)
A campus map is at URL: http://sunrise.scu.edu/scu-map.html
We have several email reflectors. (These are at least temporarily all shut down as of May 2000.) To send a message to one of these lists, simply address your email to it. Postings may be delayed by the need for intervention by a human spam filter (moderator).
sci_announce@sunrise.scu.edu for general-interest announcements, moderated by dbg, about 1/month
sci@sunrise.scu.edu for technical discussions, no limits
sci_rt@sunrise.scu.edu for discussions about SCI RealTime
SLDRAM@sunrise.scu.edu for discussions about the several-GByte/s memory interface standard, P1596.7
sci_c@sunrise.scu.edu for C-code announcements, general comments
sci_code_bugs@sunrise.scu.edu for reporting C-code bugs
P1212@sunrise.scu.edu for Control & Status Register Architecture revision
P2100@sunrise.scu.edu for Serial Plus
P1996@sunrise.scu.edu for Rugged Extendable PCI
To join a reflector: use the web or send your name, phone, fax, postal mail address, and email information to Dave Gustavson. Or (even better) join SCIzzL as an individual member by filling out the SCIzzL membership form using your Visa or Mastercard.
Look here for some additional info from SCU.
CERN High Speed Interconnects web page
University of Oslo SCI web page
Any questions or bug reports regarding these documents or related to SCI should go to dbg@SCIzzL.com
SCIzzL is a non-profit organization supported by the SLDRAM Consortium member companies and by the SCI industry. SCIzzL is grateful to Santa Clara University for providing facilities, and to Hewlett-Packard for providing the ftp server we used from 1988 through April 1995.