
The Challenges of Modern FPGA Design Verification

by Kevin Morris
Fifteen years ago, verification of FPGA designs was easy: you only needed a decent gate-level simulator to verify a circuit containing a few thousand logic elements. As the size of FPGAs started to grow, so did the complexity of the designs implemented in them.

Over time, hardware description languages sneaked into schematic designs and
eventually replaced schematic entry.

Today it is quite common for FPGA users to deal with more than one language in their designs (e.g. original sources in VHDL with an IP core in Verilog). At earlier stages of design development it may be necessary to interface HDL simulation with environments using domain-specific languages, such as Matlab. To speed up testbench simulations, routines written in C/C++ are frequently used. Sometimes, when simulation is still too slow, hardware acceleration may be necessary. In the last two years, embedded systems have found their way into the FPGA domain, adding one more headache: how to test both system software and system hardware in a simulation environment not prepared for this task.

In this article we analyze sample solutions to the problems mentioned above that
make the life of a modern FPGA system designer much easier.

The history of FPGA design verification


When Xilinx released the first FPGA in 1985, the XC2064 chip and its 1,000-gate capacity seemed very impressive. Probably no one predicted that by the year 2004 the size of an FPGA would be 10,000 times larger…

As long as design size remained within the range of several thousand gates, schematic design entry and a good gate-level simulator were enough to create and verify the entire design. Hardware description languages started to sneak into schematic designs in the shape of HDL macros, and as designs migrated into the tens of thousands of gates they gained importance. By the time FPGAs crossed the 100,000-gate threshold, it became obvious that HDLs would have to replace schematic entry and gate-level simulators. The two most important factors were:

- The impossibility of managing all-schematic designs at this level of complexity;
- The necessity of synthesizing HDL macros before gate-level simulation.

Although HDL simulators had been available since the late 1980s, the lack of efficient HDL synthesis tools prevented wider adoption of an HDL-only FPGA design flow.
When the speed of HDL simulations started to approach that of gate-level simulations, synthesizers became more efficient, and schematic tools turned into block diagram editors able to generate HDL netlists, it was time to switch the entire FPGA design flow to HDLs.

Of course, that was just the first step: VHDL and Verilog were quickly joined by traditional programming languages (C/C++) and domain-specific languages (Matlab). In the following sections we demonstrate how an FPGA designer can deal with the challenges created by this diversity.

Mixed language designs


VHDL was the first hardware description language that gained popularity in the
FPGA design world. When the size of FPGAs started to grow, Verilog solution
providers working mainly in the ASIC domain realized the opportunity to enter the
FPGA market. Right now both VHDL and Verilog are used in large FPGA
designs. If the design is created from scratch, it may be possible to handle it
entirely in one description language. If legacy code must be re-used, or if IP
cores have to be incorporated, we may end up with a mixed-language design.
While peaceful co-existence of VHDL and Verilog in the design description was
never a problem, efficient simulation of mixed-language designs is a challenge.

The first HDL simulators usually dealt with only one language. When two languages had to be handled in one design, co-simulation using both VHDL and Verilog simulators was the obvious solution. Please note that the Unix platform seemed better suited for this solution than Windows.

Frequent data exchange between separate simulation engines may have a negative effect on the performance of the entire design simulation. That's the main reason why single-kernel simulators are now the most popular verification tools.

Although there are differences in the scheduling mechanisms used in Verilog and VHDL simulations, similarities prevail. It is possible to create one simulation engine (kernel) that meets the requirements of both hardware description languages. When paired with matching compilers and elaborators, a single-kernel simulator creates the optimal environment for verification of mixed-language designs. The benefits are obvious for both designers and simulation tool vendors:

- The use of one simulation engine means that designers don't have to fight with configuring multiple tools to co-simulate properly.
- The growing size of designs creates pressure to increase speed and reduce resource usage during simulation; a single kernel makes any kind of simulation optimization easier than separate kernels would.
- A single-kernel simulator can easily be turned into a VHDL-only or Verilog-only simulator via licensing options, eliminating the vendor's need to maintain multiple tools.

Single-kernel simulators supporting Verilog and VHDL are very popular and should be the first choice for anybody working in a mixed-language environment. Some tools can handle even more description formats in one kernel, e.g. EDIF netlists.

Programming Language Interface in HDL simulation


Large FPGA designs usually require advanced verification algorithms. Some of those algorithms, even if they can be implemented in VHDL or Verilog, do not simulate efficiently in the HDL environment. That's why modern simulators provide an interface to routines written in traditional programming languages, mainly C and C++. Typical applications of such an interface include:

- Encoding functions without native support in HDLs (e.g. trigonometric functions in Verilog).
- Accessing functions of the operating system.
- Accessing hardware devices (logic analyzers, data collection units, etc.).

From its very beginning, VHDL has provided open access to programming language routines via foreign architectures and subprograms. This approach enables a very efficient connection between the simulator and user-written routines, but requires excellent knowledge of the simulator's application programming interface (API). Even if developers have no problems with the use of a given simulator's API, the chances are that whatever works now will not be portable to other simulation platforms.

Verilog used a slightly different approach: its standard contains a description of the C-language procedural interface, better known as the programming language interface (PLI). We can treat PLI as a standardized simulator API for routines written in C or C++. The most recent extensions to PLI are known as the Verilog Procedural Interface (VPI); the solution enabling a similar interface between VHDL and C/C++ is in the final stage of development and is called VHPI (VHDL Procedural Interface).
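
As a rough illustration of how such an interface is used, the sketch below registers a trigonometric system function (one of the typical applications listed earlier) with a Verilog simulator through VPI. The function name $my_sin and all surrounding code are illustrative only; the exact compilation, linking and loading steps depend on the simulator being used.

#include <math.h>
#include "vpi_user.h"

/* Called whenever $my_sin(x) is evaluated: read the real-valued argument,
   compute sin(x) in C, and hand the result back to the simulator. */
static PLI_INT32 my_sin_calltf(PLI_BYTE8 *user_data)
{
    vpiHandle call_h, arg_iter, arg_h;
    s_vpi_value val;

    (void)user_data;
    call_h   = vpi_handle(vpiSysTfCall, NULL);
    arg_iter = vpi_iterate(vpiArgument, call_h);
    arg_h    = vpi_scan(arg_iter);
    vpi_free_object(arg_iter);                      /* only one argument expected */

    val.format = vpiRealVal;
    vpi_get_value(arg_h, &val);                     /* fetch the argument */

    val.value.real = sin(val.value.real);
    vpi_put_value(call_h, &val, NULL, vpiNoDelay);  /* return the result */
    return 0;
}

/* Registration: tells the simulator that $my_sin exists and returns a real. */
static void my_sin_register(void)
{
    s_vpi_systf_data tf = {0};
    tf.type        = vpiSysFunc;
    tf.sysfunctype = vpiRealFunc;
    tf.tfname      = "$my_sin";
    tf.calltf      = my_sin_calltf;
    vpi_register_systf(&tf);
}

/* The simulator scans this list when the compiled library is loaded. */
void (*vlog_startup_routines[])(void) = { my_sin_register, NULL };

Once this code is compiled into a shared library and loaded by the simulator, the function can be called from Verilog like any built-in system function, e.g. y = $my_sin(phase);.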

PLI and VHPI give design and verification engineers developing C/C++ routines a mechanism that shields them from the low-level details of simulator operation that are irrelevant to the design functionality being verified. Since PLI (or VHPI) is standardized, both the C code and the matching PLI calls should be much easier to port between different simulation platforms. But one non-standard area still remains: the connection of the PLI (VHPI) engine with the simulation kernel. The procedures involved here vary dramatically between simulators and may look like black magic to designers who are not professional C/C++ programmers.

Fortunately a little bit of good will shown by a simulator vendor can eliminate this
last hurdle. A small applet or wizard (like the one shown in Figure 1) should be
able to create low-level interface files.

Figure 1. Sample VHPI/PLI Wizard for HDL simulator environment.


A user preparing his or her C code for connection with the simulator has to fill in several simple fields related only to the C code being connected and the PLI/VHPI routines that have to be used. After completion of the wizard, two .cpp files are created. One contains all the low-level routines required to connect the simulator with the PLI/VHPI engine and does not have to be modified by the user. The other contains placeholders for both the pure C functions and the related PLI/VHPI routines.

After entering his or her code, the user compiles and links both files, obtaining a dynamically linked library that can be used during simulation.

Co-simulation with Domain Specific Languages

Quite frequently HDLs are not the best choice to start the description of a digital
system. If the design has to implement advanced mathematical operations,
Matlab is a very convenient environment for quick verification of ideas. For many
DSP designs using algorithms published in C, a toolset similar to Celoxica’s DK2
with Handel-C support will be the best choice.

In both cases we are dealing with domain specific languages used for the
description of the design. Once the initial description is verified in its native form,
the designer faces the task of implementing that description in hardware. Some
solutions translating domain specific language descriptions directly into the
vendor-specific netlist may exist, but the traditional approach involves a gradual
translation of original files to HDLs, and then continuing the implementation in the
classic FPGA design flow.

The key issue here is maintaining design integrity during the DSL-to-HDL translation. Neither simulator alone will be of much help, but working together during HDL/DSL co-simulation they can be very useful.

Of course various co-simulation solutions exist, but the user effort required to
make them work in any particular design may be discouraging to the designers.

Let’s consider the case of converting one block of Matlab description of the
design to a VHDL design unit.

Once the designer has a VHDL model with functionality (supposedly) identical to the original Matlab description, they need to create an interface between the data systems of both environments. For every port of a VHDL entity they have to specify at least a typecast (a pair of matching data types in VHDL and Matlab). If the port happens to be a vector, there are several additional tasks: specification of the number of bits in the integer and fractional parts, of the quantization method, and of the overflow handling mechanism. Then there is the important task of convincing Matlab's Simulink that the VHDL descriptions are ready for co-simulation.
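
To give a feel for what the quantization and overflow settings mean for a vector port, the C sketch below converts a floating-point value to a signed fixed-point word with a chosen split between integer and fractional bits. The function name and parameter choices are purely illustrative; an actual co-simulation interface performs equivalent conversions internally.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: convert a double to a signed fixed-point word with
   int_bits integer bits, frac_bits fractional bits and one sign bit.
   round_nearest selects the quantization method (round vs. truncate);
   saturate selects the overflow handling (saturate vs. wrap-around). */
static int32_t to_fixed(double x, int int_bits, int frac_bits,
                        int round_nearest, int saturate)
{
    double scaled  = x * (double)(1 << frac_bits);
    double q       = round_nearest ? floor(scaled + 0.5) : trunc(scaled);
    double max_val = (double)(1 << (int_bits + frac_bits)) - 1.0;
    double min_val = -(double)(1 << (int_bits + frac_bits));

    if (saturate) {
        if (q > max_val) q = max_val;
        if (q < min_val) q = min_val;
        return (int32_t)q;
    } else {
        /* Wrap-around: keep only the low (sign + int + frac) bits, sign-extend. */
        int width     = 1 + int_bits + frac_bits;
        uint32_t mask = (width >= 32) ? 0xFFFFFFFFu : ((1u << width) - 1u);
        uint32_t bits = (uint32_t)(int64_t)q & mask;
        if (width < 32 && (bits & (1u << (width - 1))))
            bits |= ~mask;
        return (int32_t)bits;
    }
}

int main(void)
{
    /* Pi quantized to 3 integer and 12 fractional bits, rounded and saturated. */
    int32_t fx = to_fixed(3.14159, 3, 12, 1, 1);
    printf("raw = %d, value = %.5f\n", (int)fx, fx / 4096.0);
    return 0;
}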

Fortunately Matlab provides a convenient black-box mechanism; the designer just has to know how to create the black-box, or the set of black-boxes, for the entire design.

Once the black-box is created, and before starting co-simulation in Simulink, the designer may have to adjust some additional parameters, such as the sampling period. When co-simulation is running, the Scope from the Matlab environment can be used to visualize native Matlab signals and the ports of the black-boxes; to observe internal black-box signals, it is necessary to use an HDL simulator.

Please note that Matlab provides an open method of adding blocksets for co-simulation; the actual blockset creation is the task of the user connecting his or her simulator. Good HDL simulators should provide an automated method of generating Simulink blocksets. In the Windows environment it will usually take the shape of a wizard. Figure 2 presents a sample solution: a wizard is started for each HDL module that should have its own black-box for co-simulation. Upon completion of all wizard sessions, each specifying a common output directory, a Simulink blockset is created automatically in the specified location.

Figure 2. Sample Matlab (Simulink) Co-simulation Wizard

Speeding up simulations

As the size of FPGA designs grows, the decrease in pure HDL simulation performance becomes noticeable. When verification procedures take hours to execute, it is time to think about hardware acceleration.

ASIC designers had been implementing hardware acceleration of HDL simulation for some time before FPGA designers were forced to follow in their footsteps. Please note one very important difference between accelerating ASIC and FPGA simulations: while there is no target silicon available yet when an ASIC design is verified, the FPGA designer has access to the target silicon all the time; it just requires programming. Consequently, ASIC designers have to use costly emulators, while FPGA designers will get similar results using a good prototyping board.

FPGA designers can use two popular methodologies for speeding up their simulations:

- EMULATION assumes that the entire design is synthesized and implemented, then pushed into the FPGA on a hardware board connected to the computer where the HDL simulator is installed. During verification, the HDL simulator provides stimulus for the design pushed into hardware, reads the design response and processes the received data (see the sketch after this list). While this methodology assures the maximal verification speed permitted by the hardware board and its interface with the computer, visibility of the design may be insufficient for more advanced debugging. Of course an embedded logic analyzer or a similar solution can be added to the design before it is implemented and pushed to the hardware board, but the necessity to modify the design only to increase visibility during verification may be hard to accept.

- ACCELERATION assumes that only a part of the design is pushed into the hardware; the rest is kept in the HDL simulator environment and co-simulates with the hardware part. Very efficient communication protocols between the board and the simulation kernel are required to maintain a significant increase in verification speed with this methodology, but even when they are available, the wrong selection of design modules pushed into hardware may nullify the gains introduced by acceleration. That's why profiling of the design being verified is essential: modules occupying a significant portion of simulation time should be pushed into hardware first.
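
As a rough illustration of the emulation flow described above, here is a minimal sketch of the stimulus/response loop the simulator-side software performs. The board_write_inputs, board_clock_step and board_read_outputs calls are hypothetical placeholders; real prototyping boards expose vendor-specific APIs, and in practice this loop is hidden inside the simulator's co-simulation interface.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical board API. The stubs below merely mimic a trivial design
   (output = input + 1) so that the loop structure can be run stand-alone;
   a real flow would talk to the prototyping board instead. */
static uint32_t design_inputs, design_outputs;
static void     board_write_inputs(uint32_t stimulus) { design_inputs = stimulus; }
static void     board_clock_step(void)                { design_outputs = design_inputs + 1; }
static uint32_t board_read_outputs(void)              { return design_outputs; }

int main(void)
{
    /* A tiny stimulus set; in a real flow this would come from the HDL testbench. */
    const uint32_t stimulus[] = { 0x0000, 0x0001, 0x00FF, 0x1234 };
    size_t i;

    for (i = 0; i < sizeof(stimulus) / sizeof(stimulus[0]); i++) {
        uint32_t response;
        board_write_inputs(stimulus[i]);   /* drive design inputs on the board */
        board_clock_step();                /* advance the design by one clock  */
        response = board_read_outputs();   /* sample design outputs            */
        printf("cycle %u: in=0x%04X out=0x%04X\n",
               (unsigned)i, (unsigned)stimulus[i], (unsigned)response);
    }
    return 0;
}
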
Although acceleration is slower than emulation, it is easier to implement when a
high level of design visibility is required during verification. FPGA designs with a
high percentage of original HDL code will probably benefit more from
acceleration. Designs with heavy use of IP cores and previously created and
verified modules may only need emulation. Design houses working on a diverse
range of projects will see an equal demand for emulation and acceleration.
Challenges introduced by FPGA-based embedded systems

For a long period of time, changes in the methods of creating and verifying FPGA designs were more evolutionary than revolutionary. But once the size of FPGAs became large enough to place an entire microprocessor inside (with enough room left for some peripherals), revolution had to come.

The nature of a System on Chip (SOC) is dramatically different from a traditional, hardware-only FPGA design: the system software running on the embedded microprocessor is an integral part of the system, not just a way of designing it. Traditional flows used in FPGA development or software development always leave some part of the SOC unverified. Of course it is possible to develop system hardware and system software independently, verifying the entire system after the prototyping stage has been reached.

This approach has several important flaws:

- The design verification cycle is longer (each error detected on the hardware side requires re-creation of the prototype).
- Visibility of the design during verification may be insufficient (see our comments about emulation in the previous section).
- Hardware designers are forced to use slow MPU models during simulations.
- System software developers are using inaccurate C models of the hardware.

There are two promising solutions that address at least some of the problems mentioned above: SystemC and SystemVerilog. Both have many interesting features, but both are still in the development stage (they have not reached the IEEE standardization phase yet).

There are some success stories describing projects developed using both solutions, but they come from big design houses dealing with ASICs or even discrete systems. It seems that SystemC and SystemVerilog solutions are still beyond the budget of a typical FPGA-based SOC project. How will it look in the future? It is really hard to predict; we may see one of the solutions prevail, but we may also end up with two new, high-level languages…

The question is: what does an FPGA designer have to do if he or she works on an SOC project right now? There are several systems that integrate existing solutions (such as the co-simulation and acceleration discussed earlier in this article) to provide an environment better suited to the needs of FPGA designers of embedded systems.

Conclusion

The modern FPGA designer faces many different challenges while working on his or her project. Fortunately, there are many solutions to choose from, both currently available and under development. We should expect more powerful, user-friendly tools that will help designers meet the new challenges that will inevitably appear as the size of FPGAs grows.
