Q5 interview – Ian Bell, National Instruments

Ian Bell, technical marketing manager at National Instruments, talks to Electronics Weekly about how graphical test and development tools can address the challenges of programming multicore processor systems.


What is the biggest challenge for computing system developers in the move to parallel systems and multicore processors?

Even for experienced programmers using sequential, text-based tools, the biggest challenge is learning how to split their applications into parallel sections, or threads, that can be balanced across the multiple processing cores in their systems.

Multi-threaded programming with traditional text-based tools is a very complex task, mainly because this sequential model of programming is not suited to representing parallel code.

A graphical, block diagram approach to programming is naturally parallel, so it encourages programmers to expose the inherent parallelism in their application. 
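To illustrate the contrast, here is a hypothetical Python sketch (not LabVIEW, which is graphical): two operations that share no data must be parallelised by hand in a sequential, text-based language, whereas a block diagram expresses the same parallelism simply by drawing two unconnected branches. The function names and data are invented for illustration.

```python
# Hypothetical sketch: two independent operations on the same input
# must be threaded explicitly in a sequential, text-based language.
from concurrent.futures import ThreadPoolExecutor

def filter_signal(samples):
    # Placeholder "filter": scale every sample by a constant.
    return [s * 0.5 for s in samples]

def compute_rms(samples):
    # Placeholder analysis: root mean square of the samples.
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

samples = [1.0, 2.0, 3.0, 4.0]

# The programmer explicitly creates threads and collects the results;
# in a block diagram the same parallelism falls out of the wiring.
with ThreadPoolExecutor(max_workers=2) as pool:
    filtered_future = pool.submit(filter_signal, samples)
    rms_future = pool.submit(compute_rms, samples)
    filtered, rms = filtered_future.result(), rms_future.result()

print(filtered)         # [0.5, 1.0, 1.5, 2.0]
print(round(rms, 3))    # 2.739
```

Because the two calls have no dependency on each other's output, the runtime is free to schedule them on separate cores; the burden in text-based code is that the programmer must spot and encode that independence themselves.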

What new programming methods, tools and application models are needed to make this happen?

Programming today's dual- and quad-core systems using traditional text-based tools is a serious challenge even for experienced professional programmers. For more mainstream programmers, such as engineers and scientists, it is an almost impossible task.

With tens or hundreds of cores, all programmers, whether mainstream users or experienced professionals, will need to adopt new ways of developing systems. The block diagram approach to programming used in National Instruments LabVIEW was originally designed for those more mainstream developers. By designing their solutions as block diagrams, they naturally develop parallel programs.

The NI LabVIEW compiler automatically breaks these parallel programs up into multiple threads for the user and passes those threads to the operating system for assignment to the available processing cores. With this inherently parallel graphical programming language at its core, the graphical system design platform also includes higher-level models of computation, such as statecharts and simulation diagrams, which, along with multicore-optimised analysis and signal-processing IP, complete a very powerful approach to programming multicore systems.

Using this approach, engineers and scientists can focus on their solutions without getting bogged down in the details of multithreaded programming and still gain a performance advantage from the latest PC technology.

Will the move to parallel systems in the PC and embedded worlds have an impact on test architectures, creating the potential for parallel test?

The traditional approach to functional test using sequential programming languages and traditional benchtop instruments has conditioned test engineers to think about their test processes in a very sequential manner. It has driven design engineers to evolve test methodologies for their products that fit into this sequential world view.

Parallel test system architectures, multicore processors, high-performance buses and the shift towards software-defined test systems are all helping to drive changes in the way products and systems are tested. This is also driving changes in how a product is designed for test.

For example, to accommodate new test techniques, design engineers will now have to include parallel test modes in their products, where sequential test modes were the norm in the past.
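The shift from sequential to parallel test can be sketched in code. The following hypothetical Python example (the DUT names and the measurement are invented placeholders, not any real instrument API) shows the same functional test applied to several devices under test, first one after another and then concurrently.

```python
# Hypothetical sketch: the same functional test applied to several
# devices under test (DUTs), sequentially and then in parallel.
from concurrent.futures import ThreadPoolExecutor

def run_functional_test(dut_id):
    # Placeholder for a real measurement, e.g. reading back a 3.3 V rail.
    measured_volts = 3.3
    passed = abs(measured_volts - 3.3) < 0.1
    return dut_id, passed

duts = ["DUT0", "DUT1", "DUT2", "DUT3"]

# Sequential test: total time is the sum of the individual test times.
sequential = [run_functional_test(d) for d in duts]

# Parallel test: total time approaches the longest single test, provided
# the product exposes a parallel test mode and the instruments allow
# concurrent access.
with ThreadPoolExecutor(max_workers=len(duts)) as pool:
    parallel = list(pool.map(run_functional_test, duts))

print(parallel == sequential)  # True: same results either way
```

The payoff only materialises if the hardware cooperates, which is exactly why the product itself must be designed with parallel test modes rather than assuming one instrument talks to one DUT at a time.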

Will this require new test and development tools?

Many of the tools already exist, but one newer addition is the FPGA. Traditionally the preserve of the digital design engineer, the FPGA can now be found in test devices that test engineers themselves program and configure, allowing them to build in parallelism at the hardware level.

How will a graphical programming tool, such as LabVIEW, evolve in the coming years as a result of the impact of parallel computing?   

There are a number of technologies that look interesting in this area. Two technologies that are quite different in nature but equally disruptive are virtualisation and many-core. 

Virtualisation technology enables multiple operating systems to run simultaneously on the same processor by partitioning its resources. It pairs well with multicore because individual cores can be dedicated to a particular operating system. The benefits to the end user are numerous: platform consolidation, the ability to maintain multiple operating systems on a single machine, and the separation of the GUI from the RTOS in embedded applications.

Many-core extends the multicore concept to hundreds or even thousands of cores. At the moment, no software exists that can take advantage of many-core; it is a longer-term technology that will take some years to figure out.

It will be a considerable challenge for the software community to develop techniques to utilise this technology, but it is clear that the higher level of abstraction offered by graphical programming will play a significant role.

Once we figure it out, the computational boundaries will be drastically shifted and the technical community will be able to solve grand challenges that were once impossible.

See also: Q5 – Interviews with electronics industry leaders
Read all the Electronics Weekly Q5 interviews. From ARM’s chairman, Sir Robin Saxby, to touchscreen technology firm Zytronic’s MD, Mark Cambridge, the business leaders share their particular insights on the UK electronics industry.
