The illusion of simplicity: making complexity transparent
Anyone working in the electronics industry will appreciate that every generation of integrated technology successfully delivers a greater level of functionality, which in turn enables end-products to appear simpler, do more or perform better.
This is, of course, an illusion, and one that relies on levels of abstraction: creating a hierarchy of complexity that results in devices delivering greater functionality in ever-smaller form factors. That hierarchy transcends physical boundaries and is increasingly seamless across real and virtual networks.
What is becoming increasingly apparent, however, is that the illusion of simplicity is reaching an unsustainable level; as we demand greater intelligence from our digital devices, these networks must seamlessly process an exponentially greater volume of information. Sustaining this illusion across an ever-more complex hierarchy means that a revolutionary, not evolutionary, solution must be sought.
System-level design remains at the heart of making the complexities of technology transparent and it is now imperative that systems assume even more of the complexity burden. The implicit problem is that the more intelligent a system becomes, the harder it becomes to maintain and increase performance.
The solution is to make systems ‘smarter’, affording them greater autonomy in order to deliver greater functionality: to achieve more with less. This change could be seen as analogous to the Industrial Revolution, where steam power and automation gave birth to the factory, the production line and ultimately the modern world.
Although making systems smarter will require even greater complexity in terms of integration, simply creating larger integrated devices isn’t the solution. Multicore processors and massively parallel programming deliver benefits up to a point; beyond that, the underlying architecture needs to become a combination of tightly integrated functional blocks able to create something greater than the sum of its parts.
By definition, one size will not fit all; smarter systems will be optimised for specific applications where they can add real and measurable value, such as networking, vision and control. Line-speed processing of wide packets is becoming a must-have feature for intelligent systems, and delivering it will require a flexible platform if it is to address the needs of all end applications.
As computer/machine vision becomes more deeply embedded, smarter vision systems will usher in a new age of autonomy for manufacturing, mass and personal transport, as well as healthcare. Here, however, the rising complexity will be evident in the challenge of matching the processing power needed to run the sophisticated algorithms that analyse the data with the line-speed transfer of ever-larger amounts of real-time, streaming, digitised video.
This ‘Big Data’ phenomenon also reflects the challenge of meeting the unrelenting demand for more bandwidth, which underpins the need for smarter networks. OEMs are already deploying systems based on 100Gbit/s Ethernet, with many developing 400Gbit/s and terabit solutions. These incredibly high bandwidth levels simply can’t be met with yesterday’s approaches; smarter network solutions capable of high-speed deep packet inspection are the only way to meet the demand for more bandwidth coupled with the level of data analysis necessary to make intelligent use of that bandwidth.
Abstracting away the rising complexity of smart systems will demand close integration and a design approach that supports hierarchical development without imposing performance limitations. Very few technologies can offer this exacting combination of flexibility, performance and integration.
As smart systems proliferate, the illusion of simplicity and the abstraction of complexity can be maintained, but doing so will demand a revolutionary change to system design that can only be delivered through the right platform and a smarter approach.
Giles Peckham, EMEA marketing director at Xilinx