
Verification concepts

ASIC DESIGN

The term ASIC stands for Application Specific Integrated Circuit. An ASIC is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. Generally an ASIC design is undertaken for a product that will have a large production run, and the ASIC may contain a very large part of the electronics needed on a single integrated circuit. As feature sizes have shrunk and design tools have improved over the years, the complexity of an ASIC has grown from 5,000 gates to over 100 million gates.

MRD

A Marketing Requirements Document (MRD) outlines the requirements of a new product. Engineers use an MRD to create the product. The MRD covers market needs, the customer value proposition, and product functionality. It is developed by the marketing team and upper management.

Architecture Specification

Based on the MRD, the architect develops the overall architecture of the chip. This is a very high level plan. The architecture specification includes functional descriptions of each module, properties and weights.

Design Specification

The designers and architects sit together to come up with detailed design documents covering design strategies, design partitions, the types of memories to use, etc.

Verification Plan

A verification specification is called a verification/test plan. The verification engineer goes through all the above documents and prepares a verification plan to verify the design.

RTL Design

RTL stands for Register Transfer Level. The designer implements the RTL design in an HDL such as Verilog or VHDL.

Functional Verification

The verification engineers develop a TestBench and verify whether the DUT works according to the specification or not.

Synthesis

Synthesis is the process of taking a design written in a hardware description language and compiling it into a netlist of interconnected gates selected from a user-provided library. The design after synthesis is a gate-level design.

Physical Design

The physical design process includes logic partitioning, floor planning, global routing, detailed routing, compaction, and performance-driven layout. The PD team transforms the netlist representation of a system into a layout representation.

Timing Analysis

Static timing analysis is an important step in analyzing the performance of a design. Timing analysis checks setup time, hold time, recovery time, removal time, clock latency, clock skew, clock uncertainty, etc.

Tapeout

This is the final stage of the design cycle of integrated circuits. Once all the checks are done, the design is ready to be sent to Foundry.

BOTTLENECK IN ASIC FLOW

What is the bottleneck in the ASIC design flow? Verification consumes 50% to 70% of the design-cycle effort and is on the critical path in the design flow of multimillion-gate ASICs, so verification has become the main bottleneck in the design process. The functional verification bottleneck is an effect of the rising design abstraction level. The majority of ASICs require at least one re-spin, with 71% of re-spins due to functional bugs.

FUNCTIONAL VERIFICATION NEED

Why do we need functional verification? To build confidence and stay in business. A primary purpose of functional verification is to detect failures so that bugs can be identified and corrected before the chip is shipped to the customer. If an RTL designer makes a mistake in designing or coding, the result is a bug in the chip. If this bug is executed, in certain situations the system will produce wrong results, causing a failure. Not all mistakes necessarily result in failures: a bug in dead code will never cause a failure. A single mistake may result in a wide range of failure symptoms. Not all bugs are caused by coding errors; the error may be in the specification itself, and sometimes miscommunication between teams leads to a wrong design.

Example of a coding error:

1  reg [1:0] state;
2
3  parameter zero=0, one=1, two=2, three=3;
4
5  always @(state)
6  begin
7    case (State)
8      zero:
9        out = 4'b0000;
10     one:
11       out = 4'b0001;
12     two:
13       out = 4'b0010;
14     three:
15       out = 4'b0100;
16     default:
17       out = 4'b0000;
18   endcase
19 end

There is a coding error in the preceding example. The designer declared "state" on line 1 and referenced it on line 5. On line 7 the intention is also to refer to "state", but he mistakenly typed "State". Verilog is a case-sensitive language, so "State" and "state" are different variables, and this will produce wrong results.

TESTBENCH
What is a TestBench? A TestBench mimics the environment in which the design will reside. It checks whether the RTL implementation meets the design spec or not. This environment creates invalid and unexpected as well as valid and expected conditions to test the design.

LINEAR TESTBENCH
A linear TestBench is the simplest, fastest and easiest way of writing testbenches, which makes it the novice verification engineer's choice. It is also the slowest way to execute stimulus. Typically, linear testbenches are written in VHDL or Verilog. In this TestBench, a simple linear sequence of test vectors is specified. Stimulus code like this is easy to generate, for example by translating a vector file with a Perl script. Small models like a simple state machine or an adder can be verified with this approach. The following code snippet shows a linear TestBench; it exercises only some of the input combinations. This style is also bad for simulator performance, as the simulator must evaluate and schedule a very large number of events; simulation performance drops in proportion to the size of the stimulus process. Typically, linear testbenches perform the following tasks:

Instantiate the design under test (DUT)
Stimulate the DUT by applying test vectors
Output results to a waveform window or a terminal for visual inspection

Example: Linear TestBench

module adder(a,b,c); //DUT code start
  input [15:0] a;
  input [15:0] b;
  output [16:0] c;

  assign c = a + b;
endmodule //DUT code end

module top(); //TestBench code start
  reg [15:0] a;
  reg [15:0] b;
  wire [16:0] c;

  adder DUT(a,b,c); //DUT Instantiation

  initial begin
    a = 16'h45; //apply the stimulus
    b = 16'h12;
    #10 $display(" a=%0d,b=%0d,c=%0d",a,b,c); //send the output to terminal for visual inspection
  end
endmodule //TestBench code end

Testing all the scenarios known to us is not an easy task. Development time increases exponentially as the number of scenarios increases, and maintaining them is a nightmare. Instead of listing out all the possible scenarios, pick some randomly and check the DUT.

NOTE TO NOVICE ENGINEERS: The general tendency of any novice engineer is to look at the outputs in the waveform viewer. Waveform viewers are for debugging designs, not testbenches. Most operations in a TestBench execute in zero time, where a waveform viewer is not helpful. All the examples in this book output messages to the terminal for analyzing behavior.

LINEAR RANDOM TESTBENCH


A random TestBench doesn't use hardcoded values like linear testbenches do; the input stimulus is generated using random values. In Verilog, the system function $random provides a mechanism for generating random numbers; the function returns a new 32-bit random number each time it is called. These test cases are not easily readable and are also not reusable: new tests have to be created whenever the specification or design changes. The main disadvantage of this testing is that we never know what random values are generated, and simulation cycles may be wasted by generating the same values again and again.

EXAMPLE: Linear Random TestBench

module adder(a,b,c); //DUT code start
  input [15:0] a,b;
  output [16:0] c;

  assign c = a + b;
endmodule //DUT code end

module top(); //TestBench code start
  reg [15:0] a;
  reg [15:0] b;
  wire [16:0] c;

  adder DUT(a,b,c); //DUT Instantiation

  initial
    repeat(100) begin
      a = $random; //apply random stimulus
      b = $random;
      #10 $display(" a=%0d,b=%0d,c=%0d",a,b,c);
    end
endmodule //TestBench code end

HOW TO CHECK THE RESULTS


How does a verification engineer check whether the results obtained from simulation match the original specification of the design? For simple testbenches like the above, output is displayed in a waveform window or messages are sent to the terminal for visual checking. Visual checking is the oldest and most labor-intensive technique; the quality of the verification depends on the determination and dedication of the individual doing the checking. It is not practical to verify a complex model merely by examining waveforms or a text file. Whenever a change is made to the DUT to add a new feature or fix a bug, the same amount of effort must be deployed again to check the simulation results.

SELF-CHECKING TESTBENCHES

A self-checking TestBench checks expected results against the actual results obtained from simulation. Although self-checking testbenches require considerably more effort during the initial TestBench creation phase, this technique can dramatically reduce the amount of effort needed to re-check a design after a change has been made to the DUT. Debugging time is significantly shortened by useful error-tracking information built into the TestBench to show where a design fails.

A self-checking TestBench has two major parts: the input blocks and the output blocks. The input block consists of the stimulus and a driver to drive the stimulus to the DUT. The output block consists of a monitor to collect the DUT outputs and verify them. All the above approaches require the test writer to create an explicit test for each feature of the design. A verification approach in which each feature is written in a separate test case file is called directed verification.

EXAMPLE: adder example

module adder(a,b,c); //DUT code start
  input [15:0] a,b;
  output [16:0] c;

  assign c = a + b;
endmodule //DUT code end

module top(); //TestBench code start
  reg [15:0] a;
  reg [15:0] b;
  wire [16:0] c;

  adder DUT(a,b,c); //DUT Instantiation

  initial
    repeat(100) begin
      a = $random; //apply random stimulus
      b = $random;
      #10 $display(" a=%0d,b=%0d,c=%0d",a,b,c);
      if( a + b != c) // monitor logic
        $display(" *ERROR* ");
    end
endmodule //TestBench code end

HOW TO GET SCENARIOS WHICH WE NEVER THOUGHT

In directed verification, the verification environment has a mechanism to send stimulus to the DUT, collect the responses and check them. The stimulus is generated in the test case. Directed testbenches may also use a limited amount of randomization, often by creating random data values rather than simply filling each data element with a predetermined value. Each test case verifies a specific feature of the design. This becomes tedious as design complexity increases: it becomes more difficult to create patterns that fully exercise the design, and test case maintenance becomes harder and more time consuming.

In directed verification, the test writer has to list out each feature. The test writer can't think of all possible potential bug scenarios, so there are chances that bugs will escape. With these approaches, the bugs lurking in the corners hide until late in the development cycle, or aren't found at all until the product is taped out. The solution to these problems is constrained random verification. In constrained random verification, the stimulus required to verify the test features is generated automatically: the test writer specifies a set of constraints, and the TestBench automatically creates the solution space and picks scenarios from it. Constrained random verification also reduces the manual effort and code needed for individual tests. As the scenarios are generated automatically by the TestBench, the number of test case files is reduced. In directed verification, some tests share similar logic; if the engineer has to change logic that is common to

a certain group of tests, he has to edit all the test case files, which is time consuming. In constrained random verification the number of test case files is much smaller, so changes are mostly confined to the environment and are minimal. With directed verification and a fairly simple TestBench, verification engineers can start finding bugs in simulation almost immediately, even before the TestBench is fully completed. With a constrained-random verification environment, there is an up-front cost that must be invested before the first test can be run. Constraint-based generators can be easily converted into checkers if required.

Generating stimulus fully randomly is meaningless, as it may generate invalid scenarios and may regenerate the same scenario again and again, wasting precious simulation time. The user must define data structures which represent the stimulus applied to the DUT inputs. Next, constraints must be defined to guide the random generator. The constraints define the solution space, and randomization picks scenarios from that space. Constraints act as knobs in the TestBench which control the generator's randomness. The main disadvantage of constrained random verification is that we never know how well the DUT is verified. If the verification engineer can get information about the logic in the DUT which is not verified, he can further constrain the randomization or write directed testcases to exercise the unverified logic.
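To make this concrete, here is a minimal sketch of constrained randomization in SystemVerilog; the packet class and its field names are illustrative, not from any particular design:

class packet;
  rand bit [7:0]  addr;
  rand bit [15:0] length;

  // The constraints define the solution space; randomize() picks
  // a scenario from that space on every call.
  constraint legal_addr   { addr inside {8'hA0, 8'hA1}; }
  constraint legal_length { length inside {[4:10]}; }
endclass

module test;
  initial begin
    packet pkt = new();
    repeat (5)
      if (pkt.randomize()) // returns 0 if the constraints cannot be solved
        $display("addr=%0h length=%0d", pkt.addr, pkt.length);
  end
endmodule

Tightening or relaxing the constraint blocks is the "knob" referred to above: the generator's randomness is steered without rewriting the test.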

HOW TO CHECK WHETHER THE TESTBENCH HAS SATISFACTORILY EXERCISED THE DESIGN

Code coverage is used to measure the efficiency of the verification implementation. It provides a quantitative measurement of the testing space and describes the degree to which the source code of the DUT has been tested. It is also referred to as structural coverage. Code coverage answers questions like:

Have all the branches in "case" and "if" statements been entered?
Have all the conditions in "if" and "case" statements been simulated?
Have all the variables been toggled?
Have all the statements of the RTL code been exercised?
Have all the states in the FSM been entered and all the legal transitions exercised?
Have all the paths within a block been exercised?

By applying code coverage analysis techniques to hardware description languages, verification efficiency is improved by enabling a verification engineer to isolate areas of untested HDL code. The verification engineer examines a coverage report, seeks out the low values, understands why that particular code hasn't been tested fully, and writes more tests or directs the randomness to cover the untested areas where a bug may be hiding. No additional coding is required to reach 100 percent code coverage; the tool automatically shows an item as covered once the required test scenarios or combinations are exercised. In unit level verification, each module is verified in its own test environment to prove that its logic, control, and data paths are functionally correct. The goal of module level verification is to ensure that the component/unit being tested conforms to its specification and is ready to be integrated with the other subcomponents of the product. Code coverage becomes a criterion for finishing unit level testing, as every feature of the component/unit needs to be verified. At sub-system/system level, the goal is to ensure that the interfaces among the units are correct and the units work together to execute the functionality correctly. In sub-system/system level testing, code coverage may not be as useful, since the verification is not targeted at all the features of each unit.

TYPES OF CODE COVERAGE


There are a number of coverage criteria:

Statement coverage / line coverage
Block/segment coverage
Conditional coverage
Branch coverage
Toggle coverage
Path coverage
FSM coverage

The following example is used while explaining the code coverage types in the following sections.

EXAMPLE

1
2  module dut();
3  reg a,b,c,d,e,f;
4
5  initial
6  begin
7    #5 a = 0;
8    #5 a = 1;
9  end
10
11 always @(posedge a)
12 begin
13   c = b && a;
14   if(c && f)
15     b = e;
16   else
17     e = b;
18
19   case(c)
20     1:f = 1;
21     0:f = 0;
22     default : f = 0;
23   endcase
24
25 end
26 endmodule

STATEMENT COVERAGE
Statement coverage, also known as line coverage, is the easiest type of coverage to understand. It is required to be 100% for every project. Statement coverage measures how many of the statements (lines) of code are covered in simulation by the applied stimulus. If a DUT is 10 lines long and 8 of those lines were exercised in a test run, then the DUT has line coverage of 80%. Line coverage includes continuous assignment statements, individual procedural statements, procedural statement blocks, procedural statement block types, conditional statements and branches of conditional statements. It considers only executable statements; statements which are not executable, like module, endmodule, comments, timescale etc., are not counted.

Statement coverage report of the above example:
There are in total 12 statements, at lines 5,7,8,11,13,14,15,17,19,20,21,22
Covered: 9 statements, at lines 5,7,8,11,13,14,17,19,22
Uncovered: 3 statements, at lines 15,20,21
Coverage percentage: 75.00 (9/12)

BLOCK COVERAGE
Statement coverage and block coverage look somewhat similar in nature. The difference is that block coverage considers the branched blocks of if/else, case branches, wait, while, for, etc. Analysis of block coverage reveals dead code in the RTL.

Block coverage report of the above example:
There are in total 9 blocks, at lines 5,7,8,11,15,17,20,21,22
Covered: 6 blocks, at lines 5,7,8,11,17,22
Uncovered: 3 blocks, at lines 15,20,21
Coverage percentage: 66.67 (6/9)

CONDITIONAL COVERAGE
Conditional coverage, also called expression coverage, reveals how the variables or sub-expressions in conditional statements are evaluated. Only expressions with logical operators are considered. The downside is that conditional coverage doesn't take into consideration how the Boolean value was obtained from the conditions. Conditional coverage is the ratio of the number of cases checked to the total number of cases present. For an expression with a Boolean operator like AND or OR, the input combinations exercised on that expression versus the total possibilities gives the expression coverage.

Conditional coverage report of the previous example:
At LINE 13, combinations of STATEMENT c = (b && a)
b = 0 and a = 0 is Covered
b = 0 and a = 1 is Covered
b = 1 and a = 0 is Not Covered
b = 1 and a = 1 is Not Covered
At LINE 14, combinations of STATEMENT if ((c && f))
c = 0 and f = 0 is Covered
c = 0 and f = 1 is Not Covered
c = 1 and f = 0 is Not Covered
c = 1 and f = 1 is Not Covered
Total possible combinations: 8
Total combinations executed: 3

BRANCH COVERAGE
Branch coverage, also called decision coverage, reports the true or false evaluation of conditions in if-else, case and ternary operator (? :) statements. For an "if" statement, decision coverage reports whether the "if" statement is evaluated in both the true and false cases, even if no "else" statement exists.

Branch coverage report of the example:
At line 15 branch b = e;           not covered
At line 17 branch e = b;           covered
At line 20 branch 1: f = 1;        not covered
At line 21 branch 0: f = 0;        covered
At line 22 branch default: f = 0;  not covered

Coverage percentage: 40.00 (2/5)

PATH COVERAGE
Path coverage represents yet another interesting measure. Conditional statements like if-else and case create different paths through the design, diverting the flow of stimulus down a specific path.

Path coverage is considered more complete than branch coverage because it can detect errors related to the sequence of operations. The path is decided according to the if-else statements: depending on the applied stimulus, only the conditions that are satisfied execute, and the flow diverges accordingly. Path coverage is computed within always and function blocks; a path created across more than one block is not covered. Analysis of a path coverage report is not an easy task.

Path coverage report of the example:
Path 1 : 15,20 Not Covered
Path 2 : 15,21 Not Covered
Path 3 : 15,22 Not Covered
Path 4 : 17,20 Not Covered
Path 5 : 17,21 Covered
Path 6 : 17,22 Not Covered

Total possible paths: 6
Total covered paths: 1
Path coverage percentage: 16.67 (1/6)

TOGGLE COVERAGE
Toggle coverage measures how many times variables and nets toggled. It can be as simple as the ratio of nodes toggled to the total number of nodes. Possible transitions are:

X or Z --> 1 or H
X or Z --> 0 or L
1 or H --> X or Z
0 or L --> X or Z

Not all of these transition types are of interest; only 1->0 and 0->1 are important. Toggle coverage shows which signals did not change state. Toggle coverage does not consider zero-delay glitches. It is very useful in gate level simulation.

Toggle coverage report of the example:
Name  Toggled  1->0  0->1
a     No       No    Yes
b     No       No    No
c     No       No    No
d     No       No    No
e     No       No    No
f     No       No    No

FSM COVERAGE
FSM coverage is the most complex type of code coverage, because it works on the behavior of the design. Using finite state machine coverage, bugs related to the finite state machine design can be found. This coverage looks at how many times states are visited and transitioned, and how many sequences are covered in a finite state machine.

State Coverage: It gives the coverage as the number of states visited over the total number of states. Suppose you have N states and the state machine is transitioning among only N-2 states; the coverage report will then alert you that some states are uncovered. It is advised that all the states be covered.

Transition Coverage: It counts the number of transitions from one state to another and compares it with the total number of possible transitions present in the finite state machine. Possible transitions = number of states * number of inputs.

EXAMPLE of FSM:

module fsm (clk, reset, in);
  input clk, reset, in;
  reg [1:0] state;

  parameter s1 = 2'b00;
  parameter s2 = 2'b01;
  parameter s3 = 2'b10;
  parameter s4 = 2'b11;

  always @(posedge clk or posedge reset)
  begin
    if (reset)
      state <= s1;
    else
      case (state)
        s1: if (in == 1'b1)
              state <= s2;
            else
              state <= s3;
        s2: state <= s4;
        s3: state <= s4;
        s4: state <= s1;
      endcase
  end
endmodule

module testbench();
  reg clk,reset,in;

  fsm dut(clk,reset,in);

  initial
    forever #5 clk = ~clk;

  initial begin
    clk = 0; in = 0;
    #2 reset = 0; #2 reset = 1;
    #21 reset = 0; #9 in = 0;
    #9 in = 1; #10 $finish;
  end
endmodule

FSM coverage report for the above example:
// state coverage results
s1 | Covered
s2 | Not Covered
s3 | Covered
s4 | Covered
// state transition coverage results
s1->s2 | Not Covered
s1->s3 | Covered
s2->s1 | Not Covered
s2->s4 | Not Covered
s3->s1 | Not Covered
s3->s4 | Covered
s4->s1 | Covered

MAKE YOUR GOAL 100 PERCENT CODE COVERAGE NOTHING LESS


Never set your goal to anything less than 100% code coverage. Anything less than 100% is a slippery slope. If you set your goal to 98%, the most important feature, such as the system reset, may lie in the untested 2%. If the verification engineer sets the code coverage goal to 95% to accommodate 5% of unused, untestable legacy code, there is a chance that the unused legacy code gets executed anyway and the 5% of holes falls in important code. 100% code coverage provides advantages not only in reducing the bug count but also in making it easier to make significant changes to the existing code base, such as removing uncoverable areas like unused legacy blocks in the RTL code.

Don't Be Fooled By The Code Coverage Report

Highly covered code isn't necessarily free of defects, although it's certainly less likely to contain them. By definition, code coverage is limited to the design code: it doesn't know anything about what the design is supposed to do. Even if a feature is not implemented in the design, code coverage can report 100% coverage. It is also impossible to determine whether all possible values of a feature have been tested using code coverage. For example, randomization may not generate packets with all possible lengths, and this cannot be reported by code coverage. Code coverage is unable to tell you much about how well you have covered your logic; it only tells whether you've executed each line or block at least once. Code coverage does not provide information about the quality of your TestBench randomization, nor does it report what caused a line execution or state transition. Analysis of code coverage requires knowledge of the design in order to find which features are not verified, which is time consuming and outside the verification engineer's scope. If the analysis is done at a higher level of abstraction, it is easier for the test writer to identify the missed scenarios, which is not possible with code coverage. So if code coverage is less than 100%, it means there is more work to do; if it is 100%, it does not mean that verification is complete.

When To Stop Testing?

It's getting harder to figure out when to stop testing as protocol complexity increases. In a directed test environment, for each point mentioned in the test plan there is a separate test case file. So if there are 100 points in the test plan, the engineer has to write 100 test case files. After writing and executing all 100 test case files, we can say that all the points in the test plan are verified, and we can stop testing.

FUNCTIONAL COVERAGE
In CRV, each point in the test plan is generated automatically. As these points are generated automatically, we need a mechanism which tells us whether all the points in the test plan have been exercised. When all the points in the test plan are verified and code coverage is 100%, we can stop verification. What are the untested features? In directed verification, there is a separate testcase file for each feature to be verified, so to know how many features are verified, count the testcases: verification is done when all tests are coded and passing, along with 100% code coverage. In constrained random verification, all the features are generated randomly, and the verification engineer needs a mechanism to know which features of the DUT have been verified. SystemVerilog provides such a mechanism, called functional coverage. Functional coverage is "instrumentation" that is manually added to the TestBench. This is a better approach than counting testcases. Functional coverage is also better than code coverage, because code coverage reports what was exercised rather than what was tested.

Functional coverage answers questions like:

Have all packet lengths between 64 and 1518 been used?
Did the DUT get exercised with alternating good-CRC and bad-CRC packets?
Did the monitor observe that the result comes within 4 clock cycles after a read operation?
Were the FIFOs filled completely?

Summary of functional coverage advantages:

Helps determine how much of your specification was covered.
Qualifies the testbenches.
Serves as a stopping criterion for unit level verification.
Gives feedback about the untested features.
Gives information about redundant tests which consume valuable simulation cycles.
Guides you to reach the goals earlier, based on grading.

Introduction To Functional Coverage:

SystemVerilog provides two ways to specify coverage: cover groups, which use information from the transactor, monitor and checker, and assertion coverage, which uses a temporal language and can be placed outside or inside the RTL code.

Covergroup: There are three types of cover group points:

1. item functional coverage point
2. cross functional coverage point
3. transitional functional coverage point

Item

"Item" is used to capture information about a scalar value. A range of interesting values can also be observed. For example, consider a packet protocol. The packet has an address field with possible values A0 and A1, and data which can be 4 to 10 bytes. At the end of the packet, parity is attached for integrity checking. The following table identifies these coverage points for the packet:

Item_DL   Data length  4 to 10
Item_ADD  Address      A0, A1
Item_Par  Parity       Good and Bad

The coverage engine collects item values during simulation and reports how many times packets with lengths 4,5,6,7,8,9,10 are generated, packets with good and bad parity are generated, and packets with addresses A0 and A1 are generated.

Cross

"Cross" is used to examine the cross product of two or more item coverage points. Example: verify the DUT by sending both good parity and bad parity packets with all addresses.

Cross_ADD_Par   Item_ADD, Item_Par

The coverage report consists of how many times:
Packets with address A0 with good parity are generated
Packets with address A0 with bad parity are generated
Packets with address A1 with good parity are generated
Packets with address A1 with bad parity are generated

Transitional

A transitional functional point is used to examine the legal transitions of a value. Example: verify the DUT with incremental packet lengths from 4 to 10.

Trans_length ( 4 => 5 => 6 => 7 => 8 => 9 => 10 )

The coverage engine reports whether this sequence is exercised or not.
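The three cover group point types above might be coded as in the following sketch; the packet fields and bin names mirror the running example and are illustrative:

module cov_example;

  class packet;
    rand bit [7:0] address;     // A0 or A1
    rand int       length;      // 4 to 10 bytes
    rand bit       good_parity;
  endclass

  covergroup packet_cov with function sample(packet pkt);
    // Item coverage points
    Item_DL  : coverpoint pkt.length      { bins len[] = {[4:10]}; }
    Item_ADD : coverpoint pkt.address     { bins a0 = {8'hA0}; bins a1 = {8'hA1}; }
    Item_Par : coverpoint pkt.good_parity { bins good = {1}; bins bad = {0}; }

    // Cross coverage point: address crossed with parity
    Cross_ADD_Par : cross Item_ADD, Item_Par;

    // Transitional coverage point: incremental lengths 4 through 10
    Trans_length : coverpoint pkt.length {
      bins incr = (4 => 5 => 6 => 7 => 8 => 9 => 10);
    }
  endgroup

  packet_cov cov = new();

endmodule

A monitor or checker would call cov.sample(pkt) for every observed packet; the coverage engine accumulates the bins across the whole simulation.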

Assertion Coverage: Assertion coverage looks for desired behavior in the RTL. It uses an assertion language which is temporal in nature. It has direct access to design variables, and the designer can add many points in the RTL which he wants the verification engineer to cover. Example: verify by sending back-to-back packets. Functional coverage is discussed in more detail in later chapters. Verification using functional coverage and randomization is called coverage driven constrained random verification.
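As an illustration of the back-to-back packet example above, a cover property might look like the following sketch; the sop/eop (start/end of packet) signals are assumptions made for the example:

module pkt_assert_cov(input logic clk, sop, eop);
  // Covered when a new packet starts on the very next cycle
  // after the previous packet ends, i.e. back-to-back packets.
  Back_to_back : cover property (@(posedge clk) eop |=> sop);
endmodule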

COVERAGE DRIVEN CONSTRAINT RANDOM VERIFICATION ARCHITECTURE


Basic functionality of a CDRV environment:

Input side of DUT:
-- Generating traffic streams
-- Driving traffic into the design (stimuli)

Output side of DUT:
-- Checking these data streams
-- Checking protocols and timing

Collecting both functional coverage and code coverage information.
Writing deterministic tests and random tests to achieve 100% coverage.

Verification Components Required For CDCRV:

Stimulus
Stimulus generator
Transactor
Driver
Monitor
Assertion monitor
Checker
Scoreboard
Coverage
Utilities
Tests

Stimulus:

When building a verification environment, the verification engineer often starts by modeling the device input stimulus. In Verilog, the verification engineer is limited in how to model this stimulus because of the lack of high-level data structures; typically, an array/memory is created to store the stimuli. SystemVerilog provides high-level data structures and the notion of dynamic data types for modeling stimulus.

Using SystemVerilog randomization, stimulus is generated automatically. Stimulus is also processed in other verification components. SystemVerilog's high-level data structures help in storing and processing stimulus in an efficient way.
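For example, a packet stimulus might be modeled with a dynamic array and stored in a queue, as in this sketch; the field names are illustrative:

class stimulus_packet;
  rand bit [47:0] da;        // destination address
  rand bit [47:0] sa;        // source address
  rand byte       payload[]; // dynamic array: size chosen at randomization
  rand bit        good_crc;

  constraint payload_size { payload.size() inside {[4:10]}; }
endclass

module stim_example;
  stimulus_packet sent_q[$]; // a queue holds the stimulus for later checking
  initial begin
    stimulus_packet pkt = new();
    void'(pkt.randomize());
    sent_q.push_back(pkt);   // store the transaction, e.g. for the scoreboard
  end
endmodule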

Stimulus Generator

The generator component generates the stimulus which is sent to the DUT by the driver. Stimulus generation is modeled to generate stimulus based on the specification. For a simple memory, the stimulus generator generates read and write operations, the address, and the data to be stored at that address for a write operation. Scenarios like "generate alternate read/write operations" are specified in the scenario generator. SystemVerilog provides constructs to control the distribution and order of random generation. Constraints defined in the stimulus are combinational in nature, whereas constraints defined in stimulus generators are sequential in nature. Stimulus generation can be directed, directed random or automatic, and the user should have proper controllability from the test case. It should also handle generation of stimulus which depends on the state of the DUT, for example generating a read cycle as soon as an interrupt is seen. Error injection, a mechanism in which the DUT is verified by sending erroneous input stimulus, is generally also taken care of in this module. In general, the generator should be able to generate every possible scenario, and the user should be able to control the generation from directed and directed random testcases.
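One possible sketch of the "alternate read/write" scenario mentioned above, using a sequential constraint that refers to the previous operation (the names are illustrative):

typedef enum {READ, WRITE} op_t;

class mem_gen;
  rand op_t      op;
  rand bit [7:0] addr;
  rand bit [7:0] data;
  op_t           prev_op = READ;

  // Sequential in nature: the next operation depends on the previous one
  constraint alternate { op != prev_op; }

  function void post_randomize();
    prev_op = op; // remember the last operation for the next randomize() call
  endfunction
endclass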

Transactor

The transactor performs high-level operations: splitting burst operations into individual commands, handling sub-layer protocols in layered protocols such as the PCI Express Transaction Layer over the PCI Express Data Link Layer, or TCP/IP over Ethernet. It also handles the DUT configuration operations. This layer also provides the necessary information to the coverage model about the stimulus generated. The stimulus generated in the generator is high level, for example "packet with good crc, length 5 and da 8'h0". This high-level stimulus is converted into low-level data using packing. The low-level data is just an array of bits or bytes. Packing is an operation in which the high-level stimulus values (scalars, strings, array elements and structs) are concatenated in a specified manner.

Driver

The driver translates the operations produced by the generator into the actual inputs of the design under verification. Generators create inputs at a high level of abstraction, namely as transactions like read and write operations. The driver converts these inputs into actual design inputs, as defined in the specification of the design's interface. If the generator generates a read operation, then a read task is called, in which the DUT input pin "read_write" is asserted.
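A driver task for such a read operation might look like this sketch; the mem_if interface and its clk/read_write/address signals are assumptions made for illustration:

class mem_driver;
  virtual mem_if intf; // hypothetical memory interface

  function new(virtual mem_if intf);
    this.intf = intf;
  endfunction

  // Translate a high-level read transaction into pin-level activity
  task read(input bit [7:0] addr);
    @(posedge intf.clk);
    intf.read_write <= 1'b1; // assert the read_write pin
    intf.address    <= addr;
    @(posedge intf.clk);
    intf.read_write <= 1'b0; // deassert after one cycle
  endtask
endclass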

Monitor

The monitor reports protocol violations and identifies all the transactions. Monitors are of two types: passive and active. Passive monitors do not drive any signals; active monitors can drive DUT signals. Sometimes the monitor is also referred to as a receiver. The monitor converts the state of the design and its outputs to a transaction abstraction level so they can be stored in a scoreboard database to be checked later on. In short, the monitor converts pin-level activity into high-level transactions.

Assertion Based Monitor

Assertions are used to check time-based protocols, also known as temporal checks. Assertions are a necessary complement to transaction-based testing, as they describe the pin-level, cycle-by-cycle protocols of the design. Assertions are also used for functional coverage.
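A temporal check of this kind might be sketched as follows; the req/gnt handshake and the 3-cycle bound are assumptions made for illustration:

module handshake_check(input logic clk, req, gnt);
  // Pin-level, cycle-by-cycle protocol rule:
  // every request must be granted within 1 to 3 cycles.
  assert property (@(posedge clk) req |-> ##[1:3] gnt)
    else $error("req was not granted within 3 cycles");
endmodule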

Data Checker

The monitor only monitors the interface protocol; it doesn't check whether the data matches the expected data, as the interface protocol has nothing to do with the data. The checker converts the low-level data to high-level data and validates it. This operation of converting low-level data to high-level data is called unpacking, the reverse of packing. For example, the data from all the commands of a burst operation is collected and converted into raw data, then all the sub-field information is extracted from the data and compared against the expected values. The comparison status is sent to the scoreboard.

Scoreboard

The scoreboard is sometimes referred to as a tracker. The scoreboard stores the expected DUT output. Scoreboards in Verilog tend to be cumbersome and rigid, and may use up much memory, due to the lack of dynamic data types and memory allocation. Dynamic data types and dynamic memory allocation make it much easier to write a scoreboard in SystemVerilog. The stimulus generator generates random vectors which are sent to the DUT using the driver, and these stimuli are stored in the scoreboard until the output comes out of the DUT. When a write operation is done on a memory at address 101 with data 202 and, some cycles later, a read is done at address 101, what should the data be? The scoreboard recorded the address and data when the write operation was done; the checker gets the data stored at address 101 from the scoreboard and compares it with the output of the DUT. The scoreboard also holds expected-value logic if needed: take a two-input AND gate; the expect logic performs the AND operation on the two inputs and stores the output.
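For the memory example above, a SystemVerilog scoreboard might use an associative array, as in this sketch:

class mem_scoreboard;
  bit [7:0] expected[int]; // associative array: address -> expected data

  // Record what a later read from this address should return
  function void save_write(int addr, bit [7:0] data);
    expected[addr] = data;
  endfunction

  // Compare the DUT read data against the stored expectation
  function void check_read(int addr, bit [7:0] actual);
    if (!expected.exists(addr))
      $display("read from address %0d before any write", addr);
    else if (expected[addr] !== actual)
      $display(" *ERROR* addr %0d: expected %0d, got %0d",
               addr, expected[addr], actual);
  endfunction
endclass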

Coverage

This component contains all the functional coverage groups.

Utilities

Utilities are a set of global tasks which are not related to any protocol, so this module can be reused across projects without any modification to the code. It contains tasks such as global timeout, message printing control, seeding control, test pass/fail conditions, error counters, etc. The tasks defined in the utilities are used by all other components of the TestBench.
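A minimal sketch of such a utilities module (the task names and messages are illustrative):

class utilities;
  static int error_count = 0;

  // Global timeout: kill a hung simulation after the given limit
  static task watchdog(input time limit);
    #limit $display(" *TIMEOUT* simulation exceeded %0t", limit);
    $finish;
  endtask

  // Test pass/fail condition based on the global error counter
  static function void report_status();
    if (error_count == 0) $display("TEST PASSED");
    else                  $display("TEST FAILED: %0d errors", error_count);
  endfunction
endclass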

Environment:

The environment contains the instances of all the verification components, and component connectivity is also done here.

The steps required for the execution of each component are also handled here.

Tests

Tests contain the code to control the TestBench features. Tests can communicate with all the TestBench components. Once the TestBench is in place, the verification engineer needs to focus on writing tests to verify that the device behaves according to the specification.

PHASES OF VERIFICATION

Verification Plan

The test plan is a road map for how we achieve the verification goals; it is a living document. A test plan includes an introduction, assumptions, a list of test cases, a list of features to be tested, the approach, deliverables, resources, risks and scheduling, and entry and exit criteria. The test plan helps the verification engineer understand how the verification should be done. A test plan could come in many forms, such as a spreadsheet, a document or a simple text file. Sometimes the test plan simply resides in the engineer's head, which is dangerous because the process cannot then be properly measured and controlled. The test plan also contains a description of the TestBench architecture and of each component and its functionality.

Building the Testbench

In this phase, the verification environment is developed. Each verification component can be developed one by one, or, if more than one engineer is working, they can be developed in parallel. Writing the coverage module can be done at any time, but it is preferred to write the coverage module first, as it gives some idea of the verification progress.

Writing Tests

After the TestBench is built and integrated with the DUT, it's time to validate the DUT. Initially in CDV, the tests are run randomly until some 70% of coverage is reached, or until there is no improvement in coverage over one day of simulation. By analyzing the coverage reports, new tests are written to cover the holes; in these tests, randomization is directed to cover the holes, as sketched below. Finally, the hard-to-reach scenarios, called corner cases, have to be written in a directed verification fashion. Of course, debugging is done in parallel, and DUT fixes are made.
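Directing the randomization toward a hole is often just a matter of layering an extra constraint on the existing stimulus class, as in this illustrative sketch:

// Base stimulus used by the purely random tests
class packet;
  rand int length;
  constraint legal { length inside {[4:10]}; }
endclass

// Coverage hole found: long packets were never generated.
// A derived class adds a constraint to steer randomization into the hole.
class long_packet extends packet;
  constraint only_long { length inside {[9:10]}; }
endclass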

Integrating Code Coverage

Once you have achieved a certain level of functional coverage, integrate code coverage. To do this, switch on the code coverage options of the simulation tool and run the simulation; the tool then provides the report.

Analyze Coverage

Finally, analyze both the functional coverage and code coverage reports and take the necessary steps to achieve the coverage goals. Run the simulation again with different seeds, all the while collecting functional coverage information.

ONES COUNTER EXAMPLE


The following example is a TestBench for a ones counter. It has some of the required verification components, but not all of the verification components discussed earlier. Full implementations of all the components are shown in later chapters with another protocol. The language constructs used here are also described in later chapters, so don't pay attention to them yet. The intention of showing this example is to familiarize you with some of the steps required while building a verification environment and to help you understand the flow discussed above.

Specification:

The ones counter counts the number of ones coming in on a serial stream. The minimum value of the count is "0"; the count increments by one up to "15", after which the counter rolls back to "0". A reset is provided to force the counter value to "0"; the reset is active low. The input is a 1-bit port on which the serial stream enters. The output is a 4-bit port from which the count value can be taken. Reset and clock pins are also provided.

The following is the RTL code of the ones counter, with bugs.

module dff(clk,reset,din,dout);
  input clk,reset,din;
  output dout;
  logic dout;

  always@(posedge clk,negedge reset)
    if(!reset)
      dout <= 0;
    else
      dout <= din;
endmodule

module ones_counter(clk,reset,data,count);
  input clk,reset,data;
  output [0:3] count;

  dff d1(clk,reset,data,count[0]);
  dff d2(count[0],reset,~count[1],count[1]);
  dff d3(count[1],reset,~count[2],count[2]);
  dff d4(count[2],reset,~count[3],count[3]);
endmodule

Test Plan:

This is a simple test plan. Features to be verified:

1) Count should increment from "0" to "15". (Coverage item)
2) Count should roll over to "0" after "15". (Coverage transition)
3) Reset should force the output count to "0" when the count value is non-zero. (Assertion coverage)

Block Diagram:

Verification Environment Hierarchy

TOP |-- Clock generator |-- Dut Instance |-- Interface |-- Assertion block instance ( assertion coverage) |-- Testcase instance |-- Environment |-- Driver | |-- Stimulus | |-- Covergroup |-- Monitor |-- Scoreboard

Testbench Components:

Stimulus

Stimulus is a single bit value.

class stimulus;
  rand bit value;
  constraint distribution { value dist { 0 := 1 , 1 := 1 }; }
endclass

Driver

The driver consists of a reset method and a drive method. The reset method resets the DUT, and the drive method generates the stimulus and sends it to the DUT. The driver also calculates the expected DUT output and stores it in the scoreboard. Coverage is also sampled in this component; features 1 and 2 from the test plan are covered in its cover group.

class driver;
  stimulus sti;
  scoreboard sb;

  covergroup cov;
    Feature_1 : coverpoint sb.store;
    Feature_2 : coverpoint sb.store { bins trans = ( 15 => 0 ); }
  endgroup

  virtual intf_cnt intf;

  function new(virtual intf_cnt intf,scoreboard sb);
    this.intf = intf;
    this.sb = sb;
    cov = new();
  endfunction

  task reset(); // Reset method
    intf.data = 0;
    @ (negedge intf.clk);
    intf.reset = 1;
    @ (negedge intf.clk);
    intf.reset = 0;
    @ (negedge intf.clk);
    intf.reset = 1;
  endtask

  task drive(input integer iteration);
    repeat(iteration) begin
      sti = new();
      @ (negedge intf.clk);
      if(sti.randomize())                // Generate stimulus
        intf.data = sti.value;           // Drive to DUT
      sb.store = sb.store + sti.value;   // Calculate expected value and store in scoreboard
      if(sti.value)
        cov.sample();
    end
  endtask
endclass

Monitor

The monitor collects the DUT output, gets the expected value from the scoreboard, and compares them.

class monitor;
  scoreboard sb;
  virtual intf_cnt intf;

  function new(virtual intf_cnt intf,scoreboard sb);
    this.intf = intf;
    this.sb = sb;
  endfunction

  task check();
    forever
      @ (negedge intf.clk)
        if(sb.store != intf.count) // Compare expected value from scoreboard with DUT output
          $display(" * ERROR * DUT count is %b :: SB count is %b ", intf.count, sb.store );
        else
          $display(" DUT count is %b :: SB count is %b ", intf.count, sb.store );
  endtask
endclass

Assertion Coverage

This block contains the assertion coverage related to the 3rd feature mentioned in the test plan.

module assertion_cov(intf_cnt intf);
  Feature_3 : cover property (@(posedge intf.clk) (intf.count != 0) |-> intf.reset == 0 );
endmodule

Scoreboard

This scoreboard is a simple one which stores one expected value.

class scoreboard;
  bit [0:3] store;
endclass

Environment:

Environment contains instances of driver, monitor and scoreboard.

class environment;
  driver drvr;
  scoreboard sb;
  monitor mntr;
  virtual intf_cnt intf;

  function new(virtual intf_cnt intf);
    this.intf = intf;
    sb = new();
    drvr = new(intf,sb);
    mntr = new(intf,sb);
    fork
      mntr.check();
    join_none
  endfunction
endclass

Top:

The interface is declared, and the TestBench and DUT instances are taken. The TestBench and DUT are connected using the interface. The clock is also generated and connected to the DUT and TestBench.

interface intf_cnt(input clk);
  wire reset;
  wire data;
  wire [0:3] count;
endinterface

module top();
  reg clk = 0;

  initial // clock generator
    forever #5 clk = ~clk;

  // DUT/assertion monitor/testcase instances
  intf_cnt intf(clk);
  ones_counter DUT(clk,intf.reset,intf.data,intf.count);
  testcase test(intf);
  assertion_cov acov(intf);
endmodule

Tests:

This is a simple test case. It resets the DUT and then sends 10 input values.

program testcase(intf_cnt intf);
  environment env = new(intf);

  initial begin
    env.drvr.reset();
    env.drvr.drive(10);
  end
endprogram

After simulating with this testcase, the coverage report obtained was:

Total Coverage Summary
SCORE  ASSERT  GROUP
9.38   0.00    18.75

Cover group report:
VARIABLE   EXPECTED  UNCOVERED  COVERED  PERCENT  GOAL  WEIGHT
Feature_1  16        10         6        37.50    100   1
Feature_2  1         1          0        0.00     100   1

Assertion coverage report:
COVER PROPERTIES  CATEGORY  SEVERITY  ATTEMPTS  MATCHES  INCOMPLETE
Feature_3         0         0         13        0        0

This coverage report will differ if you simulate with a different tool. To improve the coverage over the first testcase, I wrote a second testcase with more input values, and with logic targeting the third feature in the test plan.

program testcase(intf_cnt intf);
  environment env = new(intf);

  initial begin
    env.drvr.reset();
    env.drvr.drive(100);
    env.drvr.reset();
    env.drvr.drive(100);
  end
endprogram


Run the simulation:

your_tool_command -f filelist test_1.sv
your_tool_command -f filelist test_2.sv

Simulation Log Report:

DUT count is 0xxx :: SB count is 0000
DUT count is 0xxx :: SB count is 0000
DUT count is 0000 :: SB count is 0000
DUT count is 0000 :: SB count is 0000
* ERROR * DUT count is 1111 :: SB count is 0001
* ERROR * DUT count is 0111 :: SB count is 0001
* ERROR * DUT count is 0111 :: SB count is 0001
* ERROR * DUT count is 0111 :: SB count is 0001
* ERROR * DUT count is 1011 :: SB count is 0010
* ERROR * DUT count is 1011 :: SB count is 0011
DUT count is 0011 :: SB count is 0011
DUT count is 0011 :: SB count is 0011
* ERROR * DUT count is 1101 :: SB count is 0100

VERIFICATION PLAN

The verification plan is the focal point for defining exactly what needs to be tested, and it drives the coverage criteria. The success of a verification project relies heavily on the completeness and accurate implementation of the verification plan. A good plan contains detailed goals using measurable metrics, along with optimal resource usage and realistic schedule estimates. The verification plan gives an opportunity to present and review the strategy for functional verification before the verification engineers have gone into the details of implementing it. It also establishes proper communication: just imagine working on a multi-site project where you have a query and must wait until the next day to see the answer by email, and they call you while you are asleep. The plan also gives an idea of the areas that are going to be difficult to verify, so that the necessary steps can be taken. It is used to determine the progress and completion of the verification phase. Verification planning should start early, with the system/architecture evaluation phase. Once the functional spec is given to the verification team, they start developing the plan. A verification plan could come in many forms, such as a spreadsheet, a document or a simple text file. Templates are good if used consistently in your company, as they create a common interface for information: a reviewer knows where to look for certain information even in a huge document, because different reviewers want different information at different moments. Generally, verification plan development is divided into two steps. Step one: What to verify? Produce a list of clearly defined features to verify; this is called the feature extraction phase. Step two: How to verify? After defining what exactly needs to be verified, define how to verify it.

Verification Plan Contains The Following:

Overview
Resources, Budget and Schedule
Verification Environment
System Verilog Verification Flow
Feature Extraction
Stimulus Generation Plan
Checker Plan
Coverage Plan
Details of Reusable Components

Overview

This section contains a description of the project: which specification is followed, and which languages and methodology are used. Information related to the HW blocks, SW blocks and HW/SW interaction is outlined.

Feature Extraction

This section contains the list of all the features to be verified. Generally each feature is associated with:

1. Unique name ID
2. Description
3. Expected result
4. Pointer to the specification
5. Priority

The "unique name ID" should be descriptive, conveying the nature of the feature, and should be unique, for example <module name>_<sub module>_<feature number>_<feature name>. Some points on how to extract the features:

Read the MRD, system specification, and macro and micro hardware specifications.
Go through the designer notes/presentations.
Annotate each line/paragraph specifying a single functional item, e.g., read/write a register.
Annotate each line/paragraph specifying multiple functional items, e.g., the steps required to set and cause an interrupt.

Identify all RTL configurations.
Identify interfaces and the related protocols across each interface.
Identify standards compliance requirements; list corner cases.
Create a list of illegal scenarios to verify.
Create a list of exceptions to verify.
Create a list of operation sequences to verify, e.g., interrupt followed by breakpoint.
Create a list of things to try in order to break the machine.
Take advantage of existing plans.
Use points from the compliance checklist for standard protocols.
Get information about the tests common to all chips.
Get the plan reviewed by a number of people, usually very experienced engineers. It is better if you get it reviewed by architects, micro-architects, leads, verification engineers, RTL designers, software designers, the marketing team and other team members.

Resources, Budget And Schedule

This section contains details of the manpower required and the schedule for each phase of the verification. Information about the tools used for simulation, debugging and bug tracking is listed.

Verification Environment

A detailed TestBench architecture is essential for a robust verification environment. This section describes the topology, each component of the TestBench, special techniques used, IPs, reused blocks, new blocks, and guidelines on how to reuse the TestBench components. For example, if you think up front about error injection, configuration, component communication, callbacks etc., you can provide hooks for them.

System Verilog Verification Flow

Details about each level (block, sub-system, system) and phase (RTL, gate) of verification are mentioned.

Stimulus Generation Plan

The stimulus generation section contains information about the different types of transactions, sequences of transactions and various scenarios generated as per the specification. Each point is associated with:

1. Unique name ID
2. Stimulus to be generated for driving into the DUT
3. Configuration/constraints information

Checker Plan

This section explains the expected-result checks in the TestBench. These can be done in the monitor/checker. The fields associated with each point are:

1. Checker unique name
2. Unique feature ID (defined in the feature plan)
3. Checker description

Coverage Plan

The coverage section explains the functional coverage of the features. A functional coverage plan should be built to help implement the coverage points in the verification environment. Generally it is better if the coverage block is broken up based on the design's logical blocks. It lists the various coverage groups and assertions. The fields of the coverage plan section are:

1. Coverage group
2. Coverage description
3. Coverage name
4. Unique name ID (defined in the feature plan)
5. Cover point (item/cross/transition/assertion)
6. Coverage goal

Details Of Reusable Components

This section contains the details of the reusable components and a description of their usage.
