
APPLYING CONTINUOUS INTEGRATION TO HARDWARE

DESIGN AND VERIFICATION


Francois Cerisier, Chief Executive Officer, AEDVICES CONSULTING, Grenoble, France
(francois@aedvices.com)
Srinivasan Venkataramanan, Chief Technology Officer, VerifWorks, Bengaluru, India
(srini@verifworks.com)
Gurubasappa, Lead Verification Engineer, VerifWorks, Bengaluru, India
(guru@verifworks.com)

Abstract: With Agile approaches being applied with more or less success to hardware design and verification
development processes, the question arises whether a continuous integration flow can be applied to a hardware
development process. Continuous Integration has been used in software since the late 90s and has been reinforced by
software development practices such as Extreme Programming and Agile processes.
In parallel, also starting in the late 90s, coverage-driven verification techniques have led verification engineers to
report coverage metrics and test statuses on a regular basis, building test suites and design non-regression results.
Although hardware and software differ in their development and delivery processes, both practices share common
aspects, and in both cases the objective is to know whether, in the end, we can release a functional product.
This paper presents the main differences between software development and hardware verification in terms of
integration process, with an emphasis on the hardware development constraints which make the Continuous Integration
flow not directly applicable.
We then present how to adapt the Continuous Integration paradigms to hardware design and verification in order
to minimize the integration feedback loop and increase status visibility, while still keeping coverage-driven,
requirement-based and assertion-based verification approaches in mind. The paper also illustrates how a
Continuous Integration tool such as Jenkins can be used to enhance project visibility, based on the authors' experience
of adopting this approach on a couple of projects in the space industry and in the IoT domain.
Keywords: JENKINS, VLSI, ASIC, FPGA, Continuous Integration.

I. INTRODUCTION
Continuous Integration was developed with the objectives of reducing integration time and providing
continuous software updates (so-called builds).
Continuous integration is a practice in which software developments are merged as often as possible, up to
several times a day, while maintaining a working build of the software. As defined in [1], in Continuous Integration,
software projects are developed on the source control mainline and each functionality is developed together with
automated self-checking tests to capture broken builds. When a function is developed, it is first merged in the
developer's workspace, then the build is run locally, and when all integration issues are resolved, the development is
pushed to the mainline. A build server then runs the build to ensure that nothing is broken in the mainline.
When a build is broken, fixing it shall be the priority of the development team, so that the mainline only keeps
functional builds ready for delivery.
In the continuous integration paradigm, a build either passes (green) or is broken (red). Some teams have extended
this with an unstable status (yellow) to highlight builds that compiled but for which not all the tests are passing.
Christophe Demarey [2] summarizes the main rules that software developers should follow to enable continuous
integration:
1. Check-in regularly to the mainline
2. Have an automated test suite / use Test Driven Development
3. Always run tests locally before committing
4. Don't commit broken code to the mainline!
5. Don't check-in on a broken build
6. Fix broken build immediately
7. Time-box fixing before reverting
8. Never go home on a broken build
9. Don't comment out failing tests
10. Keep your build fast

It is important to notice that rules #3, #4 and #5 are meant to keep the mainline functional, while rules #6, #8 and #10
are meant to get broken builds fixed as soon as a problem occurs.
To enable quick feedback loops, builds are also required to be fast. [1] makes the case that an acceptable build time
is around 10 minutes. With longer builds, a pipelined build approach can be used, reporting longer tests only
once shorter builds have passed.
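As an illustration, the staged approach can be reduced to the following minimal Python sketch; the two suite commands are hypothetical placeholders, and the slow stage only runs once the fast one has passed:

import subprocess
import sys

# Hypothetical stage commands; the slow stage runs only if the fast one passes.
stages = [
    ("commit build", ["make", "sanity"]),         # target: around 10 minutes
    ("full regression", ["make", "regression"]),  # longer tests, later stage
]

for name, command in stages:
    print("Running stage:", name)
    if subprocess.run(command).returncode != 0:
        print("Stage failed:", name, "- later stages are skipped")
        sys.exit(1)
print("All stages passed")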

Figure 1 - Continuous Integration / Delivery Flow

Since builds are self-checking, this approach can potentially lead to a continuous delivery flow (Figure 1): valid
builds are sent up to several times a day to a software store or an update server so that users can freely download the
latest software update. This has been used for years in open-source projects to keep users updated with the latest
working builds.
II. HARDWARE DEVELOPMENT AND VERIFICATION PROCESS
By contrast, hardware development is meant to deliver only one main functional release: the final chip. But
the hardware development process is not just a single-shot final build.
Functional blocks are first developed (still mainly at the RTL level), then integrated and updated all along the project
lifetime.
From the RTL design, we need to insert DFT scan chains, synthesize, run static timing analysis, perform place and
route, and so on, before obtaining a set of masks and going into production for a first silicon sample, which will still
need to be packaged and integrated on a printed circuit board (PCB).
Additionally, in parallel to hardware development, independent verification activities are performed by
independent verification teams to implement testbenches, verification environments, test cases and functional coverage
(see Figure 2).
Amongst other verification tasks, verification IPs and verification environments are developed using object-oriented
hardware verification languages (HVLs). These provide the ability to build test suites and automated non-regression
results.

Figure 2 - Verification Process Overview

Throughout the verification activities, some key progress indicators (KPIs) are collected and communicated to
management to report project status.
These KPIs usually include (a simple aggregation sketch follows the list):
- Pass/Fail Test Results
- Code coverage percentage
- Functional Coverage results
o Percentage
o Per cover groups
o Per Verification Plan Sections
o Per Requirements
- Assertion Cover/Pass/Fail Status
- Bug Counts (new, open, closed) over time
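As a toy illustration of how such KPIs can be reduced to a single build light, the following Python sketch aggregates them; the field names and pass criteria are our own choices, not a standard format:

from dataclasses import dataclass, field

@dataclass
class VerificationKpis:
    # Illustrative KPI container; names and thresholds are assumptions.
    tests_failed: int = 0
    assertion_failures: int = 0
    code_coverage_pct: float = 0.0
    functional_coverage_pct: dict = field(default_factory=dict)  # per covergroup, plan section or requirement
    open_bugs: int = 0

    def light(self) -> str:
        # Red on any failure, yellow on incomplete coverage, green otherwise.
        if self.tests_failed or self.assertion_failures:
            return "red"
        if self.code_coverage_pct < 100.0 or \
           any(pct < 100.0 for pct in self.functional_coverage_pct.values()):
            return "yellow"
        return "green"

# Example: no failures but incomplete coverage reports an unstable (yellow) status.
print(VerificationKpis(code_coverage_pct=87.5).light())  # prints "yellow"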

III. KEY BARRIERS TO CONTINUOUS INTEGRATION IN HARDWARE DEVELOPMENT


The first point to highlight about using continuous integration in hardware development is that the ultimate goal
of hardware design is to produce one and only one final release, namely the final chip. Taken literally, then, continuous
integration makes no sense at all.
However, at the RTL design level, functionalities are developed within RTL modules, modules are integrated
into higher-level modules or SoCs, and functional tests are performed by the verification teams at both unit level and
SoC level. To that extent, provided that the RTL development targets the design functionalities and that verification
activities provide a self-checking non-regression status, a parallel may be drawn between the software and RTL
development processes, and Agile methods may be adapted to hardware (see also [5]).
Let's analyze the ten rules of continuous integration with regard to RTL development:
1. Check-in regularly to mainline
The nature of RTL development leads engineers to carefully design their RTL in order to fulfil different types of
requirements and constraints, in terms of performance, timing analysis, power consumption and design size, as well as
implementing the actual functions. Also, integrating an IP from an external provider may take days to understand the
IP configurations, to implement the necessary clock tree, reset control and other glue logic around it, and to update the
interconnect to enable the IP functionalities. There is therefore little room for designers to perform several check-ins
per day without breaking the integration. Somehow, this is not a big issue, as long as the RTL code is checked in as
soon as a function is implemented. We should read "regularly" here as "as soon as a new functionality is implemented".

2. Have an automated test suite
Hardware development teams are used to having functional verification teams implement automated self-checking
tests. However, since this is done by different teams, there is usually a time shift between the moment a function is
made available for simulation and the time the verification team implements the related tests.
3. Always run tests locally before committing
If tests are already available, it should be possible for a designer to run them. Otherwise, as said above, this
will not be possible until tests are made available by the verification team. A checked-in feature may therefore not
be tested at all, and can be considered only as a preliminary release for the verification team to start playing with the
design.
4. Don't commit broken code to the mainline!
Unlike software developers, hardware designers may have good reasons to do so. This includes releasing
non-functional structural RTL to STA, DFT or back-end engineers so they can start structural analysis of the
code (power consumption, size, floorplan, etc.). If such a practice cannot be avoided, functional workarounds need to be
implemented to keep the functional status artificially passing (green). Other KPIs should however highlight the
workaround's existence and report a failed test suite (red) as long as the workaround is active and the offending RTL is
still not functional.
5. Don't check-in on a broken build
In the first place, the same reasons and the same analysis as for rule 4 apply here. Additionally, when new RTL
functions are developed, the unavailability of tests, the design team's lack of know-how to run the test suite, simulation
license costs and simulation time may prevent running a build locally prior to committing to the mainline for automatic
regression status.
6. Fix broken build immediately
As said earlier, designers and verifiers are different people and teams. Additionally, the cause of a problem may
not lie within the same team. For instance, an RTL block which is integrated at the SoC level may require the interconnect
designer to update the interconnect. It may also take days for this other team to get the necessary bandwidth and update
the top level.
Another point is that the verification team may test a feature weeks after it has been integrated. Besides, the feature
may come from an external provider. In this case, the fix will be provided through a support request and an official new
release of the IP, which itself needs to be integrated by the design team, who is by now working on different projects, if
not on vacation.
7. Time-box fixing before reverting
This rule typically tells a software developer to spend no more than 10 or 20 minutes debugging a broken build. If
the debug takes longer, revert the mainline to the previous stable state and take the time required to debug. In
hardware, it is not rare for a single simulation to last over 30 minutes, if not 6 hours. It may also take over a few days
to identify a bug and find a viable RTL solution to fix it.
8. Never go home on a broken build
Due to the time it may take to debug and fix a hardware problem, this is not always applicable. In the authors'
experience, one single complex debug once took up to 3 weeks. Applying this rule would leave design and verification
engineers with no social or family life.
Hence, the rule saying a software developer should not go home before the build is fixed is not reasonable.
As stated in [1], even in software an alternative to this is to revert the change and fix it locally prior to committing it
again. Due to the different teams involved, in our case a branch is preferable to working locally.
9. Don't comment out failing tests
This is probably the most problematic rule when applying Continuous Integration to hardware development. The
authors have worked with hardware teams that practiced this. In the authors' view, commenting out
previously passing tests is just hiding the red light behind a green one. This is surely not an option, as it creates
false positive statuses and hides the actual status of non-functional blocks.
10. Keep your build fast
Martin Fowler [1] recommends keeping a software build around a 10-minute run. If a longer time is required,
pipeline stages may be used to run more complete builds in later stages.
In hardware, a single and simple simulation usually takes no less than 10 minutes at unit level and an hour at SoC
level, and a complex SoC test may take over a few days to run. Hence, this rule definitely needs to be adapted for
hardware development.
IV. ADAPTING CONTINUOUS INTEGRATION IN HARDWARE DEVELOPMENT
Although there are barriers to adopting continuous integration in its ideal form, hardware development can
benefit from adapting some of its key practices.
The first benefit is to keep the integrated RTL as functional as possible at all times. This smooths the overall
verification tasks and also gives clear visibility of what is implemented, functional and tested. To preserve this, when
non-functional blocks are integrated, it is recommended to ensure they do not break other functions. As an example,
when provisions are made on a switch or an interconnect to integrate an additional master, a simple dummy instance
driving the default bus value will let the other IPs continue working.
Another point is that continuous integration is capable of quickly highlighting the commits which introduced a
functional regression (which break the builds). This will be the case only if the regression previously passed entirely,
or if the tooling is capable of comparing individual tests and highlighting those that are newly failing due to a
particular commit.
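A minimal sketch of such a per-test comparison, assuming the failing test names of two consecutive regressions are available as sets (the test names below are made up):

def newly_failing(previous_fails, current_fails):
    # Tests failing now that were passing before point at the latest commits.
    return set(current_fails) - set(previous_fails)

previous = {"uart_tx_overflow"}                     # known failure, already waived
current = {"uart_tx_overflow", "spi_master_basic"}  # one new failure appeared
print(newly_failing(previous, current))             # {'spi_master_basic'}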
Finally, since Continuous Integration tools run tests on a regular basis, we can use the flow to build the
verification KPIs as well as our assertion-based verification mapping and coverage.
As broken regressions are visible to everyone on the project, this also accelerates the feedback loop, enabling quick
communication between the different teams to fix any issue.
Using a continuous integration flow in hardware therefore needs the following adaptations:
- Commit to the mainline as soon as a function is implemented. This delivers the function to the
verification team quickly and on a regular basis.
- In a Continuous Integration process, we cannot count on a function integrated by the design team
as long as tests have not been made available by the verification team. This however requires the
verification team to be aware of the function's availability so that it can be tested.
- Isolate sanity tests which run quickly and create a dedicated test suite from them. The authors
recommend, for example, having one test which performs register checks on each block and one simple
functional test of each block, so that if something goes fundamentally wrong in the integration, it
is captured with a minimum set of tests and in minimum simulation time.
- Develop a more complete or full test suite to be run either overnight or on a weekly basis
depending on the runtime, and keep the quick test suite to run as soon as there is a commit.
- Isolate tests which are known to be failing in a separate regression, or mark them as known failing,
to allow the tool to report an unstable build (yellow light) as opposed to a failing build (red
light); see the status sketch after this list.
- Set up pipelined builds so as to separate:
o Short functional regression status (on each commit, or on a regular basis several times a day if the
mainline changed):
• Green if all tests passed
• Yellow if there are known failing tests
• Red if there are new failing tests
o Full functional regression status (daily, or weekly if the mainline changed):
• Green if all tests passed
• Yellow if there are known failing tests
• Red if there are new failing tests
o KPI status, based on the full regression analysis:
• Green if all targets are reached at 100%
• Yellow if targets are not reached
• Red if some assertions or tests failed
o Requirement coverage status, based on the full regression analysis:
• Green if all requirements are covered and all tests passed
• Yellow if requirements are not fully covered
• Red if some assertions or tests failed
- The authors' experience has also shown an interesting approach using pipelined builds to highlight
workaround status: the first build shows the artificially passing result with the workarounds, while the
last one disables all workarounds, reruns the related tests and shows the failing result.
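The status sketch referred to in the list above: a minimal Python illustration of the traffic-light logic shared by the pipeline stages, assuming test results arrive as a name-to-pass mapping and that the waiver list of known-failing tests is maintained by hand (all test names are hypothetical):

KNOWN_FAILING = {"dma_burst_corner", "pcie_replay_timeout"}  # hand-maintained waivers

def build_status(results):
    # results: dict mapping test name -> True (pass) / False (fail)
    failing = {name for name, passed in results.items() if not passed}
    if failing - KNOWN_FAILING:
        return "red"     # at least one previously passing test broke
    if failing:
        return "yellow"  # only waived tests fail: unstable, not broken
    return "green"       # everything passes

print(build_status({"reg_access": True, "dma_burst_corner": False}))  # "yellow"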
V. CUSTOMER USE CASE 1: USING JENKINS FOR REQUIREMENT BASED VERIFICATION STATUS
The above adaptations were applied to the hardware development of a couple of FPGAs targeting space
applications, developed under a requirement-based design flow and verified with a UVM coverage-driven testbench. We
used the Jenkins Continuous Integration platform with a few plugins and Python scripts to report the results from
simulation to the server.
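The exact scripts are project specific; as an illustration, a minimal reporting step could convert simulator pass/fail results into a JUnit-style XML file, which the standard Jenkins JUnit plugin can then archive and trend (the test names and the result source below are hypothetical):

import xml.etree.ElementTree as ET

# Hypothetical per-test results gathered from the simulation logs.
results = {"reg_access_blk0": True, "basic_traffic_blk1": False}

suite = ET.Element("testsuite", name="sanity",
                   tests=str(len(results)),
                   failures=str(sum(1 for ok in results.values() if not ok)))
for name, passed in results.items():
    case = ET.SubElement(suite, "testcase", name=name)
    if not passed:
        ET.SubElement(case, "failure", message="simulation reported FAIL")

# Jenkins picks results.xml up through its JUnit results plugin.
ET.ElementTree(suite).write("results.xml", encoding="utf-8", xml_declaration=True)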
In our case, tests were separated into three main categories:
- Sanity tests, which we ran on every commit.
- Simple functional tests, which could take a few hours to run and which we ran every night.
- Complete functional tests, which last over a couple of days and which we ran every week.

The sanity tests took around 20 minutes to run. They basically performed one register read per block and enabled
a few basic functionalities in each block.
Having this run on every commit (up to once every 15 minutes) proved to be very useful in terms of
interactions between the design and verification teams.
For instance, a few times an engineer forgot to commit some files, which made the regression fail. The
guilty party had fixed the issue even before other teams had a chance to check out the failing design.
On other occasions, design modifications required the verification environment to be updated. Analysis of the
Continuous Integration results quickly highlighted which parts needed updating.
Another aspect of using Continuous Integration is that we made the KPIs visible to everyone working on the
project. It was therefore very easy to obtain the project status.
Additionally, as we were on an assertion-based project, we mapped functional metrics to requirements so
that each verification aspect of each requirement could be traced. As Continuous Integration tools do not directly
offer a requirement-based approach, we mapped these metrics to sub-class test groups named after the
requirements.

Using a two-step flow, with one step to run the regression and one step to analyze the coverage database against the
requirement matrix, we were able to provide a per-requirement pass/fail status directly from the tool, as sketched below.
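A minimal sketch of the analysis step, assuming the requirement matrix maps each requirement to the covergroups (here, test groups named after the requirements) that verify it; the matrix contents and coverage figures are made up:

# Hypothetical requirement matrix and covergroup results from the regression.
requirement_matrix = {
    "REQ-001": ["cg_reg_access", "cg_reset"],
    "REQ-002": ["cg_traffic_basic"],
}
covergroup_pct = {"cg_reg_access": 100.0, "cg_reset": 100.0, "cg_traffic_basic": 72.0}

for requirement, groups in requirement_matrix.items():
    # A requirement passes only if every mapped covergroup is fully covered.
    status = all(covergroup_pct.get(g, 0.0) >= 100.0 for g in groups)
    print(requirement, "PASS" if status else "FAIL")
# Prints: REQ-001 PASS / REQ-002 FAIL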

Figure 3 - Per-requirement status

VI. CUSTOMER USE CASE 2: USING JENKINS FOR ASSERTION BASED VERIFICATION STATUS
The above adaptations were also applied to a couple of ASIC applications, developed under a requirement-based
design flow and verified with a UVM coverage-driven testbench and assertions. We used the Jenkins Continuous
Integration platform with a few plugins and Python scripts to report the results from simulation to the server.
- A couple of register tests, with all other tests relying on assertions.
The register tests took around a few minutes to run. They basically perform writes and reads on the registers;
a couple of the read registers are volatile in nature. Since these register tests do not rely on assertions, we
disabled the assertions for them.
- Most tests failing until all assertions pass.
Most of the tests rely on assertions, so a single assertion failure can cause most of the tests to fail, as
shown in the test status analysis (Figure 4, red) and the test build status (Figure 6); a decline in assertion
failures makes a large number of tests pass. The assertion status analysis (Figure 5) gave a clearer view of the
design maturity.

Figure 4 - Test Status Analysis

Figure 5 - Assertion Status Analysis

Figure 6 - Test Build Status Analysis
- Build status on each commit.
o Assertions under debug were disabled.
o This enabled quickly fixing design-versus-verification alignment issues.
o The mainline/trunk was always kept functional.
o It was broken on two occasions due to big design changes, each requiring one day to realign.

VII. CONCLUSION AND RECOMMENDATIONS


Although hardware development comes with its load of constraints, there are interesting benefits in adapting
the Continuous Integration paradigms to hardware development. User experience has shown that Continuous
Integration practices can increase the interaction between the verification team and the design team on one side,
and with management on the other, providing a better understanding of the design status and requirement verification
status, as well as almost real-time KPIs.
Due to tool license costs, debug time and the split between hardware and verification teams, Continuous
Integration cannot be applied directly without careful adaptation.
Hardware commits should focus on functional aspects as much as possible, verification teams should be reactive
to newly introduced features to provide functional tests as early as possible, and pipelined build stages should
be used to keep, on one hand, a self-checking non-regression status highlighting any broken features in the first
stages, and on the other hand, verification metric targets (coverage, assertions, requirements) in later stages,
highlighting the overall verification status, which will be green only at the end of the project.

ACKNOWLEDGMENT
We would like to thank our mentors Mr. Srinivasan Venkataramanan, Co-Founder and Verification
Technologist at VerifWorks, Mrs. Ajeetha Kumari, Serial Woman Entrepreneur at CVC, and Francois Cerisier,
Chief Executive Officer, AEDVICES CONSULTING, for their continuous motivation and guidance. Last
but not least, a big thanks to the entire CVC and AEDVICES teams for their overwhelming support.

REFERENCES:

[1] Continuous Integration, Martin Fowler, 01 May 2016.
http://martinfowler.com/articles/continuousIntegration.html
[2] What is Continuous Integration, Christophe Demarey, April 12, 2013.
http://chercheurs.lille.inria.fr/~demarey/Main/ContinuousIntegration
[3] Building Value with Continuous Integration
http://www.collab.net/content/building-value-continuous-integration
[4] Continuous Integration is Dead, Yegor Bugayenko, 8 October 2014 / 26 September 2016
http://www.yegor256.com/2014/10/08/continuous-integration-is-dead.html

[5] Are We Too Hard for Agile?, François Cerisier and Mike Bartley, IP-SoC, December 2012.
http://www.design-reuse.com/articles/32442/are-we-too-hard-for-agile.html
