by Kevin Morris
Fifteen years ago, verification of FPGA designs was easy: a decent gate-level
simulator was all you needed to verify a circuit containing several thousand
logic elements. As the size of FPGAs grew, so did the complexity of the
designs implemented in them.
Over time, hardware description languages sneaked into schematic designs and
eventually replaced schematic entry.
Today it is quite common for FPGA users to deal with more than one language
in their designs (e.g., original sources in VHDL with some IP cores in
Verilog). At earlier stages of design development it may be necessary to
interface HDL simulation with environments using domain-specific languages,
such as Matlab. To speed up testbench simulations, patches written in C/C++
are frequently used. Sometimes, when simulation is still too slow, hardware
acceleration may be necessary. In the last two years embedded systems have
found their way into the FPGA domain, adding one more headache: how to test
both system software and system hardware in a simulation environment not
prepared for this task.
In this article we analyze sample solutions to the problems mentioned above that
make the life of a modern FPGA system designer much easier.
As long as design size remained within the range of several thousand gates,
schematic design entry and a good gate-level simulator were enough to create
and verify the entire design. Hardware description languages started to sneak
into schematic designs in the shape of HDL macros, and as designs migrated
into tens of thousands of gates they gained importance. By the time FPGAs
crossed the 100,000-gate threshold it became obvious that HDLs would have to
eliminate schematic entry and gate-level simulators. The two most important
factors were:
Of course, that was just the first step: VHDL and Verilog were quickly joined
by traditional programming languages (C/C++) and domain-specific languages
(Matlab). In the following sections we demonstrate how an FPGA designer can
deal with the challenges created by this diversity.
The first HDL simulators usually dealt with one language only. When two
languages had to be handled in one design, co-simulation using both VHDL and
Verilog simulators was the obvious solution. Please note that the Unix
platform seemed better suited to this approach than Windows.
A single-kernel simulator handling both languages in one engine proved a
better answer, for several reasons:
- The use of one simulation engine means that designers don't have to
struggle with configuring multiple tools to co-simulate properly.
- The growing size of designs creates pressure to increase speed and reduce
resource usage during simulation; a single kernel makes any kind of
simulation optimization easier than separate kernels do.
- A single-kernel simulator can easily be turned into a VHDL-only or
Verilog-only simulator via licensing options, eliminating the need for the
software vendor to maintain multiple tools.
Single-kernel simulators supporting Verilog and VHDL are very popular and
should be the first choice for anybody working in a mixed-language
environment. Some tools can handle even more description formats in one
kernel, e.g., EDIF netlists.
PLI and VHPI give design and verification engineers developing C/C++ routines
a mechanism that shields them from the low-level details of simulator
operation that are irrelevant to the verified design's functionality. Since
PLI (or VHPI) is standardized, both the C code and the matching PLI calls
should be much easier to port between different simulation platforms. But one
non-standard area still remains: the connection of the PLI (VHPI) engine to
the simulation kernel. The procedures involved here vary dramatically between
simulators and may look like black magic to designers who are not
professional C/C++ programmers.
Fortunately, a little good will shown by a simulator vendor can eliminate
this last hurdle. A small applet or wizard (like the one shown in Figure 1)
should be able to create the low-level interface files.
After entering his or her code, the user compiles and links both files, receiving a
dynamically linked library that can be used during simulation.
Co-simulation with Domain Specific Languages
Quite frequently HDLs are not the best choice to start the description of a digital
system. If the design has to implement advanced mathematical operations,
Matlab is a very convenient environment for quick verification of ideas. For many
DSP designs using algorithms published in C, a toolset similar to Celoxica’s DK2
with Handel-C support will be the best choice.
In both cases we are dealing with domain-specific languages used for the
description of the design. Once the initial description is verified in its
native form, the designer faces the task of implementing that description in
hardware. Some solutions translating domain-specific language descriptions
directly into a vendor-specific netlist may exist, but the traditional
approach involves gradual translation of the original files to HDLs, and then
continuing the implementation in the classic FPGA design flow.
The key issue here is maintaining design integrity during DSL-to-HDL
translation. One simulator alone will not be helpful, but the HDL simulator
and the DSL environment working together in co-simulation can be very useful.
Of course, various co-simulation solutions exist, but the user effort
required to make them work in any particular design may be discouraging.
Let's consider the case of converting one block of a Matlab description of
the design to a VHDL design unit.
Once the designer has a VHDL model with functionality (supposedly) identical
to the original Matlab description, they need to create an interface between
the data systems of both environments. For every port of a VHDL entity they
have to specify at least a typecast (a pair of matching data types in VHDL
and Matlab). If the port happens to be a vector, there are several additional
tasks: specification of the number of bits in the integer and fractional
parts, the quantization method, and the overflow-handling mechanism. Then
there is the important task of convincing Matlab's Simulink that the VHDL
descriptions are ready for co-simulation.
Once the black box is created, and before starting co-simulation in Simulink,
the designer may have to adjust some additional parameters, such as the
sampling period. When co-simulation is running, the Scope from the Matlab
environment can be used to visualize native Matlab signals and the ports of
the black boxes; to observe internal black-box signals, it is necessary to
use an HDL simulator.
Please note that Matlab provides an open method of adding blocksets for
co-simulation; the actual blockset creation is the task of the user
connecting his or her simulator. Good HDL simulators should provide an
automated method of generating Simulink blocksets. In the Windows environment
this will usually take the shape of a wizard. Figure 2 presents a sample
solution: a wizard is started for each HDL module that should have a black
box for co-simulation. Upon completion of all wizard sessions specifying a
common output directory, a Simulink blockset is created automatically in the
specified location.
Figure 2. Sample Matlab (Simulink) Co-simulation Wizard
Speeding up simulations
As the size of FPGA designs grows, the decrease in pure HDL simulation
performance becomes noticeable. When verification procedures take hours to
execute, it is time to think about hardware acceleration.
- Acceleration assumes that only part of the design is pushed into the
hardware; the rest is kept in the HDL simulator environment and co-simulates
with the hardware part. Very efficient communication protocols between the
board and the simulation kernel are required to maintain a significant
increase in verification speed with this methodology, but even when they are
available, the wrong selection of design modules pushed into hardware may
nullify the gains introduced by acceleration. That is why profiling of the
design being verified is essential: modules occupying a significant portion
of simulation time should be pushed into hardware first.
Although acceleration is slower than emulation, it is easier to implement when a
high level of design visibility is required during verification. FPGA designs with a
high percentage of original HDL code will probably benefit more from
acceleration. Designs with heavy use of IP cores and previously created and
verified modules may only need emulation. Design houses working on a diverse
range of projects will see an equal demand for emulation and acceleration.
Challenges introduced by FPGA-based embedded systems
For a long time, changes in the methods of creating and verifying FPGA
designs were more evolutionary than revolutionary. But once FPGAs became
large enough to host an entire microprocessor (with room left over for some
peripherals), revolution had to come. Verifying such systems with traditional
methods creates several problems:
- The design verification cycle is longer (each error detected on the
hardware side requires re-creating the prototype)
- Visibility of the design during verification may be insufficient (see our
comments about emulation in the previous section)
- Hardware designers are forced to use slow MPU models during simulation
- System software developers are using inaccurate C models of the hardware.
There are two promising solutions that address at least some of the problems
mentioned above: SystemC and SystemVerilog. Both have many interesting
features, but both are still in the development stage (they have not yet
reached the IEEE standardization phase).
There are some success stories describing projects developed using both
solutions, but they come from big design houses dealing with ASICs or even
discrete systems. It seems that SystemC and SystemVerilog solutions are still
beyond the budget of a typical FPGA-based SOC project. How will things look
in the future? It is really hard to predict; one of the solutions may
prevail, but we may also end up with two new high-level languages…
The question is: what does an FPGA designer working on an SOC project have to
do right now? There are several systems that integrate existing solutions
(such as the co-simulation and acceleration discussed earlier in this
article) to provide an environment better suited to the needs of the FPGA
designer of embedded systems.
Conclusion
The modern FPGA designer faces many different challenges while working on his
or her project. Fortunately, there are many solutions to choose from, both
currently available and in development. We should expect more powerful,
user-friendly tools that will help designers meet the new challenges that
will inevitably appear as the size of FPGAs grows.