By 2015, Intel Corp. hopes that its early forays into multicore processors and virtualization technology will have evolved into more sophisticated technologies that deliver the performance needed to transform computing, Intel’s research and development chief said at the Spring Intel Developer Forum (IDF).
Justin Rattner, a senior fellow and director of Intel’s Corporate Technology group, wants future computers to be capable of interacting with people and to develop greater predictive capabilities.
“We want technology to become more natural, where we can have a conversation with a variety of information devices that populate our world,” Rattner said.
To make this vision work, Intel needs to develop technologies that will usher in the “era of tera” that Rattner’s predecessor, Pat Gelsinger, talked about at last year’s Spring IDF. Hardware developers will need to build systems capable of processing terabytes of data on chips delivering teraflops (trillions of floating-point operations per second) of compute to enable a world of intelligent computing, Gelsinger said last year. As part of Intel’s reorganization in January, Gelsinger now heads Intel’s Digital Enterprise Group, while long-time researcher Rattner is in charge of technology development.
This year’s IDF has been stuffed full of dual-core processor briefings and explanations of new virtualization technologies built into processors and chipsets. By 2015, Intel wants to be capable of mass-producing what Rattner called “many-core” processors with hundreds of processing cores, and of virtualizing other parts of the computer, such as graphics controllers or storage.
Intel is working on a new programming language called Baker that might help programmers take advantage of chips with hundreds of cores, Rattner said. The company is testing the language on some of its networking processors, which have multiple processing engines, and demonstrated how it can sense changing workloads and allocate processing cores as needed, cutting the power drawn by cores that aren’t in use.
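Intel has not published details of Baker, but the behavior Rattner described, a runtime that watches the offered load and powers cores up or down to match, can be sketched in a few lines. The sketch below is purely illustrative; the CoreAllocator class, the packets_per_core parameter and the backlog figures are hypothetical and are not taken from Intel’s language.

```python
class CoreAllocator:
    """Illustrative stand-in for a load-aware core scheduler (not Baker itself)."""

    def __init__(self, max_cores=8, packets_per_core=100):
        self.max_cores = max_cores
        self.packets_per_core = packets_per_core  # assumed per-core throughput
        self.active_cores = 1                     # keep one core awake at all times

    def rebalance(self, queued_packets):
        """Pick how many cores to keep powered for the current backlog;
        cores above that count could be dropped into a low-power state."""
        needed = -(-queued_packets // self.packets_per_core)  # ceiling division
        self.active_cores = max(1, min(self.max_cores, needed))
        return self.active_cores


allocator = CoreAllocator()
for backlog in (20, 450, 900, 60):  # simulated packet backlogs
    cores = allocator.rebalance(backlog)
    print(f"backlog={backlog:4d} packets -> {cores} core(s) powered")
```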
Another demonstration showed how a PC could simultaneously run two graphics-intensive workloads on a virtualized graphics controller. Each application believed it had full access to the controller, and coming hardware advances will make sharp, three-dimensional graphics available to both applications at once, Rattner said.
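Device virtualization of this kind generally relies on a mediation layer: each application holds what looks like its own controller, while a scheduler behind it time-slices the one physical device. The sketch below is only a conceptual illustration of that arrangement; the class names are hypothetical and do not reflect Intel’s virtualization interfaces.

```python
class PhysicalGPU:
    """The single real device."""
    def render(self, owner, frame):
        print(f"GPU rendering frame {frame} for {owner}")


class Mediator:
    """Hypervisor-style layer that multiplexes the physical device."""
    def __init__(self, gpu):
        self.gpu = gpu
        self.pending = []

    def submit(self, owner, frame):
        self.pending.append((owner, frame))

    def schedule(self):
        # Interleave queued work so every guest gets time on the real hardware.
        for owner, frame in self.pending:
            self.gpu.render(owner, frame)
        self.pending.clear()


class VirtualGPU:
    """What each application sees: apparently a whole graphics controller."""
    def __init__(self, owner, mediator):
        self.owner = owner
        self.mediator = mediator

    def render(self, frame):
        self.mediator.submit(self.owner, frame)


mediator = Mediator(PhysicalGPU())
app_a, app_b = VirtualGPU("app_a", mediator), VirtualGPU("app_b", mediator)
for frame in range(2):  # both applications draw, unaware of each other
    app_a.render(frame)
    app_b.render(frame)
mediator.schedule()
```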
Of course, all of these new technologies will require a steady supply of data from memory, or their full capabilities will never be reached. New packaging technologies such as stacked chips or stacked wafers would allow more data to flow back and forth between a CPU and a memory chip, Rattner said.
Intel’s recent work on silicon lasers could also pave the way for optical interconnects on chips, another way the transfer of data around a chip could approach the speed of light, Rattner said.