You are on page 1of 2

The IEEE has two definitions for “Quality”: ‫ בודקים‬, ‫קבוצת העבודה בסקראם מכילה מפתחים‬ Structured programming-Programming

ים‬ Structured programming-Programming paradigm aimed on Component, System, User Acceptance Test (UAT), determine its sub-ranges and transition points *if the
The degree to which a system, component, or process ‫ובעלי מקצוע אחרים ומנוהלת עצמאית באחריות‬ improving the clarity, quality, and development time of Regression dimension is not ordered, base the partitioning on similarity
meets specified requirements ‫משותפת בכל שבועיים עד חודש כל אחד יכול לראות‬ a computer program by making extensive use Unit test *Defined and written to test the smallest unit of *Lay out the analysis in classical boundary/equivalent table.
The degree to which a system, component, or process meets ‫תוכנה עובדת ולהחליט לשחרר אותה או להמשיך‬ of subroutines, block structures and for and while loops—in code (function, class, method)*Is done during the Identify best representatives *Create test for the
customer or user needs or expectations ‫ התקדמות של פרוייקט‬.‫לשפר אותה ספרינט נוסף‬ contrast to using simple tests and jumps such as implementation (or even before TDD)*By the developer consequences of the data entered, not just input filter
The ISO (ISO 8402) defines “Quality” as: ‫ אורך‬.‫סקראם מתבצעת ע"י סדרה של ספרינטים‬ the goto statement which could lead to "spaghetti code" *Small and independent, quick and simple*Stabilize the *Identify secondary dimensions. Analyze them in classical
‫ אורך קבוע מסייע לקבלת‬.‫ שבועות‬2 -4 ‫מקובל נע בין‬
The totality of features and characteristics of a product or which is both difficult to follow and to maintain. code*Test a specific portion of the code- sends a specific way *Summarize your analysis with a risk/equivalence table .
‫ במהלך הספרינט המוצר עובר את כל‬.‫קצב יותר טוב‬
service that bear on its ability to meet stated or implied ( ‫מובניות‬Structural) ‫של התוכנה‬ input to a method and verifies the results*Oracle is used to Generalize to multidimensional variables *Analyze
.‫ קידוד ובדיקות‬,‫ עיצוב‬:‫שלבי הפיתוח‬
needs ( ‫בדרך כלל מכילה הסתכלות ותכנון מלמעלה למטה‬Top down ) design unit test*Usually is automated*Gives the confidence independent variables that should be tested together.
Work is structured in cicles of work called Sprints, iterations
Philip B. Crosby defined “Quality as: ‫כאשר המפתח מגדיר סט של רוטינות ופונקציות המקודדות בנפרד‬ to do long-term development, support the refactoring of the *Analyze variables the hold results *Analyze non-
of work – 2 to 4 weeks, during each sprint, teams pull from a
We define quality as conformance to requirements. ‫שמשמעותו שהקוד יכול להיות מועמס לתוך הזיכרון בצורה יעילה‬ code.* Saves time, help prevent regressions from being independent variables. Deal with relationships and
prioritized list of customer requirements – user stories, so .‫והשימוש החוזר ברוטינות אלו הוא חלק חשוב בעיצוב התוכנה‬
Requirements must be clearly stated. Measurements introduced and released.* Provides excellent implicit constrains. Prepare for additional testing *Identify and list
that the features that are developed first are of the highest ( ‫עיצוב שכזה מקל על בדיקת הקוד כל רוטינה בנפרד‬Unit Test )
determine conformance….. Nonconformance detected is the documentaion because they show exactly how the code is unanalyzed variables. Gather information for late analysis
value for the customer. At the end a potentially shippable ‫והאינטגרציה ביניהם כתהליך בדיקה עצמאי‬
absence of quality designed to be used. *Imagine and document risks that don’t necessarily map to
product is delivered. In a procedural program, modules interact by reading and
Product view: the quality of a product is measurable in What to automate Not all the code should be automated* an obvious dimension. For even small programs, the input
KanBan development principles writing state that is stored in shared data structures.
objective manner; Framework helps* Unit tests are recommended* Repetitive domain is so large that it might as well be infinite. Testing is
Kanban is a method for managing the creation of products In an object oriented program, modules in the form of
User view: quality is fitness for use; tests (part of the regression)*Activities that are prone to fundamentally about choosing finite sets of values from the
with an emphasis on continual delivery while not objects interact by sending messages to other objects
Manufacturing view: quality is the result of the right human errors (checking detailed data)* Can’t be done input domain. Input parameters define the scope of the
overburdening the development team. Like Scrum, Kanban In a procedural program, the code is king and the data is
development of the product; manually (e.g., simulating 100 concurrent users )*Time input domain: *Parameters to a method *Data read from a
is a process designed to help teams work together more subordinate. In other words, you have programs which act
Value-based view (Economic): quality is a function of costs consuming tests file *Global variables *User level inputs. Domain for each
effectively. on data and they're not usually tightly bound.
and benefits Asserts types assertEquals- Very common*assertNotNull - input parameter is partitioned into regions. At least one
Visualize what you do today (workflow): seeing all the items In the OO world, objects are the primary thing of interest.
Verification of a product shows proof of compliance with Asserts that an object isn't null*assertNull - Asserts that an value is chosen from each region. If the partitions are not
in context of each other can be very informative An object consists of data and the code that is allowed to act
requirements—that the product can meet each “shall” object is null*assertSame - Asserts that two objects refer to complete or disjoint, that means the partitions have not
Limit the amount of work in progress (WIP): this helps on that data, and they are very tightly bound. It is the
statement as proven though performance of a test, analysis, the same object*assertNotSame - Asserts that two objects been considered carefully enough.
balance the flow-based approach so teams done start and concept of encapsulation, the hiding of information.
inspection, or demonstration (or combination of these). do not refer to the same object*assertTrue - Asserts that a Modeling the Input Domain: Step1: Identify testable
commit to too much work at once Naming convention is a set of rules for choosing the
Validation of a product shows that the product condition is true*assertFalse - Asserts that a condition is functions: * Individual methods have one testable function
Enhance flow: when something is finished, the next highest character sequence to be used for identifiers which
accomplishes the intended purpose in the intended false *In a class, each method often has the same characteristics
thing from the backlog is pulled into play denote variables, types, functions, and other entities
environment—that it meets the expectations of the Unit test coverage: 1. Code coverage, using dedicated *Programs have more complicated characteristics—
Scrum: Pre-defined roles of Scrum master, Product owner in source code and documentation.
customer and other stakeholders as shown through environment: Statement – 100% coverage, every statement- modeling documents such as UML use cases can be used to
and team member - Timeboxed sprints - Work is pulled CamelCase is the practice of writing compound words or
performance of a test, analysis, inspection, or line of executable code *Branches- logical branches n the design characteristics *Systems of integrated hardware and
through the system in batches (the sprint backlog)- No phrases in which the words are joined without spaces and
demonstration. code (e.g., if, while)*Path- testing all possible aths in the software components can use devices, operating systems,
changes allowed mid-sprint- Velocity- More appropriate in are capitalized within the compound like BreakFast,
Software quality (Definitions) code. 2.Data coverage: Boundary condition*Typical data hardware platforms, browsers, etc. Step 2 : Find all the
situations where work can be prioritized in batches that myName etc. CamelCase is so named because each word in
(1) The degree to which a system, component, or process values*Pre and post conditions* Illegal data 3. Model Driven parameters *Often fairly straightforward, even mechanical
can be left alone the identifier is put together without spaces, but with the
meets specified requirements. [IEEE_Std_610.12-1990] Test Design Coverage (MDTD),Find values, automate, run, *Important to be complete *Methods : Parameters and
KanBan: No prescribed roles- Timeboxed sprints- Work is first letter of each word captilised, looking like humps of a
(2) The degree to which a system, component, or process evaluate and report, considering test requirements (TR) and state (non-local) variables used *Components : Parameters
pulled through the system (single piece flow)- Changes can camel. There are two varieties in CamelCase
meets customer or user needs or expectations. test criterion (TC): Graphs*Logic expressions (based on e.g., to methods and state variables *System : All inputs,
be made at any time - Cycle time - More appropriate in - UpperCamelCase and lowerCamelCase. In java variables,
[IEEE_Std_610.12-1990] state charts, machine state charts)*Input domains including files and databases Step 3 : Model the input
operational environments with a high degree of references, method names are declared in lowerCamelCase
(3) Conformance to explicitly stated functional and Test data coverage: Check for functions with arguments/ domain *The domain is scoped by the parameters *The
variability in priority Object Oriented Metrics – Motivation
performance requirements, explicitly documented parameters*Did we cover all the data types*Valid and structure is defined in terms of characteristics *Each
RUP - Rational Unified Process Object-oriented design and development are popular
development standards, and implicit characteristics that are invalid data*Test boundary condition – maximum value, characteristic is partitioned into sets of blocks *Each block
a waterfall of iterations – each phase has a list of objectives. concepts in today's software development environment.
expected of all professionally developed software. minimum value, 0, positive value, and negative value as represents a set of values *This is the most creative design
Iterative development: business value is delivered Object oriented development requires not only a different
[Roger Pressman. Software Engineering: A Practitioner's applies for that data type*Test typical data values – most step in using ISP Step 4 : Apply a test criterion to choose
incrementally in timeboxed cross-discipline iterations. approach to design and implementation, it requires a
Approach. McGraw-Hill, 6 ed., 2004] common data based on domain and customer scenarios combinations of values *A test input has a value for each
Reliable software: different approach to software metrics.
(4) Set of systematic activities providing evidence of the (often get from customer)*Test pre- and post- conditions parameter *One block for each characteristic *Choosing all
‫תוכנה אמינה – אפשר לסמוך עליה תמיד – תבצע את‬ Thus, the approach to software metrics for object oriented
ability of the software process to produce a software .‫מה שהיא צריכה (ורק את זה) ותמיד‬ – looking at code check assumptions made such as some combinations is usually infeasible *Coverage criteria allow
product that is fit to use programs must be different from the standard metrics set. data being greater than 0 (division by zero)*Test bad subsets to be chosen Step 5 : Refine combinations of blocks
‫הבדיקות של אמינות התוכנה הן‬NFT – ‫מובהקות‬
Constraints of the iron triangle Coupling Metrics are the counts of interactions between data:*Illegal data values*No data*Too little or too much into test inputs *Choose appropriate values from each
‫ מה היא עושה ? אלא איך‬:‫זאת אומרת איננו שואלים‬
Scope is the work to be done – such as features and classes data*Uninitialized variables block. Two Approaches to Input Domain Modeling:
‫היא מצליחה לעשות תמיד את מה שהתחייבנו‬
functionalities – to deliver a working product. Main metrics: CBO (coupling between object classes) For a Test Driven Design (TDD): Performed Before Interface-based approach: * Develops characteristics
? ‫שתעשה‬
Resources include budget and team members working to class, is defined as the number of other classes to which it is implementation start*Similarly to UT, is written to test the directly from individual input parameters * Simplest
– ‫חלק מנושא האמינות הוא גם התאוששות מכשל‬
deliver and execute. )‫(מבחני הקפה‬ coupled. RFC (response set for class( The number of smallest unit of code usually a function, method or application *Can be partially automated in some situations
Time is when teams will deliver to the market such as Measures, Metrics, And Indicators methods in the set of all methods that can be invoked in class*Code is written to Pass the code*Iterative process. Functionality-based approach * Develops characteristics
releases and milestones. Measure Provides a quantitative indication of the extent, response to a message sent to an object of a class Why TDD? Programmers dislike testing: *They will test from a behavioral view of the program under test *Harder to
How to use it amount, dimension, capacity, or size of some attribute of a Cohesion Metrics measure how well the methods of a class reasonably thoroughly the first time*The second time develop—requires more design effort *May result in better
Assess all changes, risks and issues against the triangle and product or process are related to each other. Main metrics: LCOM (Lack of however, testing is usually less thorough*The third time, tests, or fewer tests that are as effective
weigh up your course of action against the impact on your Measurement The act of determining a measure Cohesion) Measures the dissimilarity of methods in a class well..*Testing is considered a “boring” task*Testing might be Software Requirements specification A formal crucial
critical objective. For example if the key project constraint is Metric(IEEE) A quantitative measure of the degree to which by instance variable or attributes. TCC (tight class cohesion( the job of another department / person*TDD encourages document that contains the following: *Purpose of the
cost, only the most business critical change requests are a system, component, or process possesses a given attribute Ratio of number of similar method pairs to total number of programmers to maintain an exhaustive set of repeatable software to be specified *Product/service description *
likely to be approved. However, if quality is the biggest goal Indicator A metric or combination of metrics that provides method pairs in the class. NP = maximum number of tests*Tests live alongside the Class/Code Under Test Architecture *Users characteristics *Assumptions *Business
time and cost might move to accommodate enhancement insight into the software process, a software project, or the possible connections. NDC = number of direct (CUT)*With tool support, tests can be run selectively*The high level requirements *Requirements for development
requests product itself connections.TCC = NDC/NP. TCC<0.5 considered non- tests can be run after every single change *Priority, estimation *Limitations *Functional/Non
Application lifecycle management Activities of a Measurement Process cohesive classes. What are the symptoms of code rot functional requirements. A good requirement *Correct
‫ההיבט הניהולי‬Governance - ‫ מטרת ההיבט היא להבטיח‬- Formulation The derivation (i.e., identification) of software Inheritance Related Measures Main metrics: DIT (depth of Rigid*Fragile*Unpredictable*Inseparability (for *Unambiguous (all statements have exactly one
‫ ההיבט‬.‫שהתוכנה תספק את צרכי הארגון שאליו נועדה התוכנה‬ measures and metrics appropriate for the representation of inheritance tree) Maximum inheritance path from the class modules)*Not reliable (hard to estimate changes) interpretation) *Complete (where TBDs are absolutely
‫הניהולי הוא ההיבט היחיד שמתפרש לאורך כל מחזור החיים של‬ the software that is being considered to the root class.The deeper a class is in the hierarchy, the What are the reasons Stupid managers?Impatient necessary, document why the information is unknown, who
‫ הוא ההיבט החשוב ביותר במחזור החיים‬,‫ במובנים רבים‬.‫התוכנה‬ Collection The mechanism used to accumulate data required more methods and variables it is likely to inherit, making it customers?Requirements changes?Stressed schedules?Go is responsible for resolution, and the deadline) *Consistent
.‫של תוכנה‬ to derive the formulated metrics more complex.NOC (number of children)Number of fast BUT go clean!Don’t blame-- > programmer writes the *Ranked for importance and/or stability *Verifiable (avoid
- ‫ההיבט הפיתוחי‬Development ‫ההיבט אחראי על פיתוח‬ Analysis The computation of metrics and the application of immediate sub-classes of a class.NOC measures the breadth code soft descriptions like “works well”) -testable *Modifiable
‫ זה מתחיל קצת אחרי הגאית הרעיון וממשיך בתחזוקה‬,‫התוכנה‬ mathematical tools of a class hierarchy, where DIT measures the depth. Depth is What is clean code? Like making Sushi*Elegant efficient-say (evolve the Requirements Specification only via a formal
‫ גם אחרי שלב ה‬.‫של התוכנה גם אחרי כניסתה לשימוש‬ Interpretation The evaluation of metrics in an effort to gain generally better than breadth, since it promotes reuse of a lot by only few words*Efficient –runs quickly*Can be read change process, preserving a complete audit trail of
Deployment, ‫ גרסאות‬,‫יש צורך בהמשך פיתוח התוכנית‬ methods through inheritance. High NOC has been found to as a good prose*Written by somebody who cares*Each changes) *Does not specify any particular design *Traceable
insight into the quality of the representation
‫ עדכון גרסה יכול לעלות יותר‬,‫ בתוכנות מסוימות‬.‫ עדכונים‬,‫חדשות‬ indicate fewer faults. This may be due to high reuse, which is routine to read is like you expected (cross-reference with source documents and spawned
Feedback Recommendations derived from the
‫ לכן יש לקחת בחשבון את העלות לעומת‬.‫מתועלת התוכנה לארגון‬ desirable.
interpretation of product metrics and passed on to the Test Driven Design Clean code gives confident, not afraid to documents).
.‫התועלת‬ ‫מה ניתן להבין כאשר מסתכלים על הקוד‬
software development team clean it *“Before you write code, think about what it will do. Re(f)= P(f)*C(f)
- ‫ההיבט התפקודי‬Operations ‫ל יישום פעיל חייב להיות נתון‬ ‫ אובייקטים כלולים בקלאס מוצהר‬,‫ מספר התפצלויות‬:‫כמויות‬ Risk Based Testing Re(f) - Risk
Attributes of Effective Software Metrics Write a test that will use the methods you haven’t even
‫ היבט זה מתחיל קצת לפני שלב ה‬.‫לניטור ולניהול‬Deployment LOC- line of code, #of object declared in source file, # of Exposure of function f
Simple and computable It should be relatively easy to learn written yet.”*A test is not something you “do”, it is
.‫ומתפרש עד שלב המוות‬ class declared in source file tested, data members in a *P(f) - Probability of a fault in function f *C(f) - Cost related
how to derive the metric, and its computation should not something you “write” and run once, twice, three times,
Software development life cycle - ‫עקרונות‬ class,complexity: cyclomatic complexity, reachability, to a fault in function f.
demand inordinate effort or time etc.*It is a piece of code*Testing is therefore
,‫ משתמ שים‬,‫ סיכונים‬,‫פרק הקונספט – מגבלות‬ looping depth. .‫ תעוד פנימי‬,‫ מכסה‬,‫ מובנה‬,‫מתאים לסטנדרט‬
‫רשימת מכולת‬ Empirically and intuitively persuasive The metric should “automated”*Repeatedly executed, even after small
( ‫מאיירס‬Glenford J. Myers ‫) מציע לנו לשאול עצמנו לסיכום‬ changes*Maintain short list of defects*TDD is a low level Model-based testing is an application of model-based
‫פרק הדרישות – פרק המה מסמך דרישות‬ satisfy the engineer’s intuitive notions about the product
:‫הבדיקה‬ design*TDD creates testable code*TDD eliminated the fear, design for designing and optionally also executing artifacts
‫ פרק האיך (כולל‬- ‫עיצוב‬/‫(פרק תכנון‬DFD ‫נעשה‬ attribute under consideration
Was the program easy to understand?
‫בד"כ בשני שלבים או יותר – חלק גדול מהקורס‬ Consistent and objective The metric should always yield creates the courage to change to perform software testing or system testing. Models can
Was the high-level design visible and reasonable?
.‫יוקדש לנושא‬ results that are unambiguous How to implement TDD? 3 laws 1.Write NO production be used to represent the desired behavior of a system under
Was the low-level design visible and reasonable?
‫פרק הקידוד – נעסוק רק באיך עושים את זה יותר‬ Consistent in the use of units and dimensions The code xcept to pass a failing test 2.Write only ENOUGH of a test (SUT), or to represent testing strategies and a test
Would it be easy for you to modify this program?
‫נקי משגיאות‬/‫טוב‬ mathematical computation of the metric hould use test to demonstrate a failure 3.Write only ENOUGH environment.
Would you be proud to have written this program? production code to pass the test TDD Stages: write a test-
‫ או ביידיש‬- ‫ בדיקות בפיתוח‬Unit test ‫מי‬ measures that do not lead to bizarre combinations of units Importance: *Unit testing wont be sufficient to check the
‫להשתמש בטכניקות הבאות לשם התמודדות עם הקוד‬
.‫שמתחמק אומר יעשו זאת המפתחים‬ Programming language independent Metrics should be compile-fix compile errors-run test, watch it fail-write code- functionalities *To ensure that the system is behaving in the
Code inspections using checklists
‫אינטגרציה – הפרק הנעלם שלוקח חלק ענק‬ based on the analysis model, the design model, or the run test, watch it pass-refactor code – run test, watch it same sequence of actions.*Model-based testing technique
‫מהמאמצים‬ Group walkthroughs
structure of the program itself pass, loop… has been adopted as an integrated part of the testing
‫בדיקות מערכת – מה שהארגון עושה על מנת‬ Desk checking
An effective mechanism for high-quality feedback The Mocking objects object -‫ מממש את אותו ה‬interface ‫של‬ process.*Commercial tools are developed to support model-
‫שיוכל למכור את התוכנה‬ Peer reviews
metric should lead to a higher-quality end product .‫ניתן לדמות את התנהגות גמישה של האובייקט אותו הוא מדמה‬ based testing.
‫בדיקות קבלה – מה שהלקוח עושה על‬ I say – find software to do the job!
Opportunities for measurement during the software life ‫האובייקט המדומה (זה לא‬stub*)"‫קביעת ערך "טיפש‬ Advantages: *Higher level of Automation is achieved.
‫מנת לקנות את התוכנה‬ .‫**החזרה‬.‫זריקת *השמת ערכים לפרמטרים‬exception.
cycle *Exhaustive testing is possible. *Changes to the model can
‫התקנה אצל לקוח – מה שהארגון והלקוח‬ Mocks and stubs are both dummy implementations of
Project initiation – measures from past projects are used to be easily tested. Disadvantages: *equires a formal
‫עושים בשביל לממש‬
estimate cost and duration of new project. Requirements objects the code under test interacts with.*Stubs can be specification or model to carry out testing. *Changes to the
‫תחזוקה ועדכונים – על זה עושים את‬
specification – requirements specified quantitatively, “the thought of as inputs model might result in a different set of tests altogether.
.‫הכסף‬
mean time to failure <180 days”. Project planning – to the code under test.*Mocks can be thought of as outputs *Test Cases are tightly coupled to the model.
‫מחזור חיי הפיתוח של תוכנה‬
1.the specification, design, development and testing a concrete plan specify performance goals in a measurable from the code under test. Stub-Class-Test. Class-Mock-Test. Regression testing for agile development For effective
way. Coding&UnitTesting – Measures of code complexity Test Types: Specification, Unit, Risk based, scenarious, regression testing in agile development, it is important that
software application. It covers the entire lifecycle from idea
conception through the development, testing, deployment, used to establish testing and maintenance plane. System regression, stress, user, model and state, load. a testing team builds a regression suite right from the initial
test – measures of product quality used to decide when the ‫ לפעמים‬.‫ פונקציה היא יכולת של המוצר‬.‫בדיקות פונקציונליות‬ stages of software development and then keeps building on
support and ultimately retirement of systems.
software is stable enough to ship. Operation – measure Cyclomatic Complexity reflects the decision-making ‫ מדרישות‬:‫ זיהוי כל פונקציה‬.‫נקרא לפונקציה גם תכונה או פקודה‬ it as sprints add up. A few things to determine before a
2. An umbrella term that covers several different disciplines
customer satisfaction. structure of the program. It is recommended that for any ‫או ממדריך משתמש (אפילו חלקי)*ממעבר לאורך כל ממשקי‬ regression test plan is built are: *Identifying what
that traditionaly were considered separately, including ….
given module the metric should not exceed ten. This value ‫המשתמש*מניסוי פקודות בכל שורות הפקודה* חיפוש בתוך‬ improvements must be implemented in the test-cases.
.‫ הקוד או קבצי המקור של פקודות או שמות של פקודות‬,‫התוכנית‬ *Identify the time to execute regression testing. *Outline
should be used as an indicator of modules which may
‫ קבע באיזה אופן תדע‬:‫פעילויות מפתח לבדיקת פונקציה‬
benefit from redesign. It is a measure of the size of the what needs to be automated in the regression test plan and
‫שהפונקציה פועלת (שאלת אורקל)*זהה משתנים המשמשים את‬
directed graph, how. *Analyze the outcome of the regression testing.
‫הפונקציה ובדוק את קצותיהם* זהה משתני סביבה שעלולים‬
Definition: The cyclomatic number V(G) of a graph G with n Regression testing is retesting changed segments of
‫להגביל את הפונקציה בתוך הבדיקה* בדוק שכל הפו נקציה‬
vertices, e edges, and p connected components is: v(G) = e - ‫מבצעת את מה שהיא אמורה לבצע ואינה עושה את מה שהיא לא‬ application system. It is performed frequently to ensure the
n +p ‫ מוכוון‬:‫ שימושים עיקריים בבדיקת פונקציונליות‬.‫אמורה לעשות‬ validity of the altered software. In most of the cases, time
Fan-in = number of ingoing dependencies ‫ ביכולתך להשוות את רשימת‬:‫להבטיח את כל התפקודים‬ and cost constraint is prominent; hence the whole test suite
Fan-out = number of outgoing dependencies ‫הפונקציות לרשימות בתכנית הבדיקות כולל רשימת תסריטים‬ cannot be run. Thus, prioritization of the test cases becomes
Heuristic: a high fan-in/fan-out indicates a high complexity .‫ תת רכיב של תכונה‬,‫ תפקוד‬,‫ומקרי בדיקה ע"מ לכסות כל רכיב‬ essential. The priority criteria can be set accordingly e.g. to
Microservice principles ‫ הכירות סימפטים עם‬:‫שימושי עבור בדיקות ראשונות של המוצר‬ increase rate of fault detection, to achieve maximum code
Many smaller (fine grained), clearly scoped services, Single ‫היכולות של המוצר*סריקה מהירה לזיהוי בעיות רציניות שצריך‬ coverage and so on. Sprint level Regression testing: This
Testing Techniques Responsibility Principle, Domain Driven Development, ‫ יופי‬: ‫ הסיכונים של בדיקות פונקציונאליות‬.‫לטפל בהן בהתחלה‬ regression test is focused on testing the new functionalities
Specification-Based: Equivalence Partitioning,Classification Bounded Context, Independently Managed, Clear ownership ‫של דרך להתחיל איתה אבל מספר אנשים עשויים להסתמך על כך‬ that are implemented since the last release. End to End
Tree Method,Boundary Value Analysis,State Transition for each service. ‫ כשיטה עיקרית לבדיקה הבעיה היא‬.‫כמהלך בדיקת מרכזי‬ Regression testing: This test incorporates end-to-end testing
The Linear Sequential model It is the oldest process models Testing,Decision Table Testing,Cause-Effect Graphing,Syntax Control Flow Graphs (CFG) A CFG models all executions of a ‫ חסרה‬:‫בהדגשת בדיקות יחידה תוך בידוד יחידה מסויימת‬ of ‘all’ the core functionalities of the product.
Testing,Combinatorial Test,Scenario Testing (including Use method by describing control structures, Nodes : Statements ,‫אינטראקציה בין רכיבים* חסרים ה בטים העוסקים בעומס‬ Pairwise and combinatorial approach All-Pairs technique is
in SE history Sometimes called the classic life cycle or the
Case Testing),Random Testing or sequences of statements (basic blocks), Edges : Transfers ‫ ולא נבחן נושא הפסקות‬,‫אינטרקציה עם מה שקורה ברקע‬ very helpful for designing tests for applications involving
waterfall model, the linear sequential model suggests a
Structure-Based: Statement Testing, Branch Testing, of control, Basic Block : A sequence of statements such that ‫ *לעתים קרובות מתמקד בתפקודים‬.)‫ועצירות (מתוכננות או לא‬ multiple parameters. Tests are designed such that for each
systematic, sequential approach to software development
Decision Testing, Condition Testing, Data Flow Testing ‫ללא התחשבות בגבולות או פרטי מידע אחרים * אינו משיב על‬
Prototypes development models The Prototyping Model is a if the first statement is executed, all statements will be (no pair of input parameters to a system, there are all possible
Experience-Based: Error Guessing, Exploratory, Checklist ‫מטלות המשתמשים – כאשר הלקוח יכול להשיג את היתרון‬
systems development method (SDM) in which a prototype branches), CFGs are sometimes annotated with extra discrete combinations of those parameters. The test suite
based ,Attacks. .‫המובטח ע"י תוכנה‬
(an early approximation of a final system or product) is built, information, branch predicates, defs, uses, Rules for covers all combinations; therefore it is not exhaustive yet
Static and Dynamic analytics: Static analysis is performed in Test Case is a finite and tree-structured labeled transition
tested, and then reworked as necessary until an acceptable translating statements into graphs … very effective in finding bugs.
system. A set of test inputs, execution conditions, and
prototype is finally achieved from which the complete a non-runtime environment. Typically a static analysis tool Testability is the quality that allows a component to be Optimization methods minimizing test volume Regression
will inspect program code for all possible run-time behaviors expected results developed for a particular objective, such
system or product can now be developed. This model works easily tested in isolation. Testability is a quality factor; it is a Test Selection (RTS) technique selects a subset of test cases
as to exercise a particular program path or to verify
best in scenarios where not all of the project requirements and seek out coding flaws, back doors, and potentially measurement or evaluation to predict the amount of effort from the original test suite and uses it to revalidate the
malicious code. Dynamic analysis adopts the opposite compliance with a specific requirement.* A test case is the
are known in detail ahead of time. It is an iterative, trial-and- required for testing and help in allocating required unmodified portions of a program: *Data-flow Analysis
combination of test data and oracle information to
error process that takes place between the developers and approach and is executed while a program is in operation. A resources. There is no clear definition of what aspects of Approach *Graph-Walk Approach *Modification-based
dynamic test will monitor system memory, functional determine the validity of the test. *A form of contract
the users. Step1 – determine objectives, step 2 – Identify software are actually related to testability . However, Technique *Slicing-based test Selection *Coverage-based
between a service provider and a service user .Design Test
Risks, Step 3 – development and testing, Step 4 – Plan next behavior, response time, and overall performance of the testability has always been an elusive concept and its Prioritization *Risk based prioritization *Cost-Aware Test
Cases: 1. Creating a MODEL of the requirements, using
iteration. system. This method is not wholly dissimilar to the manner correct measurement or evaluation is a difficult exercise. Case Prioritization * Coverage calculation Techniques Code
requirements tree*Model can be table, flow graph, and
‫הצגות אלטרנטיביות למודל העקרוני‬ in which a malicious third party may interact with an Reducing effort in measuring testability of Microservice and Coverage is a technique to measure how much the test
state diagram 2.Cover the model, using base TCs (no data in
The basic idea in Prototype model is that instead of freezing application. Having originated and evolved separately, static IOT is a must, in order to deliver quality software within time covers the software and how much part of the software is
and dynamic analysis have, at times, been mistakenly this stage)*Design the scenario (set of TCs) 3.Define the TEST
the requirements before a design or coding can proceed, a and budget not covered under the test. The tester is able to find out
ORACLE 4.Supplementing With TEST DATA 5.TCs contains set
throwaway prototype is built to understand the viewed in opposition. Test oracle in general means The mechanism that what features of the software are exercised by the code.
Strengths and Weaknesses of Static and Dynamic Analyses: of steps and expected results & conditions 6.Inspection &
requirements. This prototype is developed based on the determines success or failure of a test.Testing is a sequence Using the code coverage technique and number of bugs in
Analysis 7.Run & evaluate the results. Elements of test case:
currently known requirements. By using this prototype, the Static analysis, with its white box visibility, is certainly the of stimuli and response observations. Ground truth- is a total the application we can build confidence upon the system on
more thorough approach and may also prove more cost- title, priority, status, initial configuration, software
client can get an “actual feel” of the system, since the test oracle that always gives the “right answer”. its quality and functioning.
configuration, steps, expected behavior.Test technique is:
interactions with prototype can enable the client to better efficient with the ability to detect bugs at an early phase of Deterministic approach.Oracle tests: Derived, implicit,
the software development life cycle. For example, if an error Selection tool- what to test, sampling tool*Design tool- how
understand the requirements of the desired system. specified. Coverage criteria are method to measure how much code
to test, what to include in the test. Domain testing: Say two
Prototyping is an attractive idea for complicated and large is spotted at a review meeting or a desk-check. Had the Automated software testing :Able to run two or more has been exercised by the test suite. The test suite must
error become lodged in the system, costs would multiply. test cases A and B are test – equivalent If we believe
systems for which there is no manual process or existing specified test cases,Able to run a subset of the automated satisfy the rule of the coverage criteria. The various
software will fail on A if it fail on B. Test – equivalent should
system to help determining the requirements. Static analysis can also unearth future errors that would not test cases, No intervention needed after launching tests, Coverage criteria are as follows: * Statement Coverage *
emerge in a dynamic test. Dynamic analysis, on the other be an equivalent relation.Equivalent classes (EC ) for test
The prototype are usually not complete systems and many Automatically set-up and/or record relevant, Test Decision Coverage * Condition Coverage * Path Coverage
cases; set S of test case such as: If A ad B are in S the A and B
of the details are not built in the prototype. The goal is to hand, is capable of exposing a subtle flaw or vulnerability environment,Run test cases,Capture relevant Generating Test Sequences We can use the machine-
too complicated for static analysis alone to reveal and can are test- equivalent. If A is in S and A is a test-equivalent to B
provide a system with overall functionality. results,Compare actual with expected results,Report analysis then B is in S. Equivalent partition - for test cases: Set P of readable model to create test sequences: *Random walk
Start: requirement gathering-quick design-building also be the more expedient method of testing. A dynamic of pass/fail. *All transitions *Shortest paths first *Most likely paths first
equivalent classes such that every test case is in same class S
prototype-customer evaluation-refining prototype- (loop to test, however, will only find defects in the part of the code Test oracle challenges: Completeness of information: e.g., Testing artifacts The products developed into different
in P. Domain – set of values. Subdomain- equivalence
quick design) – Engineer product – stop. that is actually executed. inputs, specified results are usually quite abstract*Different phases of software testing life cycle and shared with the
classes. Representatives of each sub-domain. Equivalence
Prototypes development models (spiral) Procedural Programing is a programming paradigm, derived context may affect the expected behavior*Human Oracle stake holders are known as Test Artifacts. Generally the
from structured programming, based upon the concept of analysis- 2 values belong to the same class if the program
Evolutionary prototyping also called as breadboard Cost: writing test cases, evaluate test outcomes, software test team should prepare these artifacts and they
treats them similarly. Select the RISKY candidates. A schema
prototyping is based on building actual functional prototypes the procedure call. Procedures, also known as routines, execution*Accuracy of information: e.g., similarity to SUT, are supposed to take sign off on those artifacts from the
subroutines, or functions (not to be confused with for domain testing: Characterize the variables: * Identify
with minimal functionality in the beginning. The prototype arithmetic accuracy* Usability of the oracle: e.g., form of stake holders to make sure that there is no communication
the potentially interesting variables* Identify the variable(s)
development forms the heart of the future prototypes on mathematical functions, but similar to those used in results, data set size, compotators gap between customer and test team. Any change
functional programming), simply contain a series of you can analyze now, this is the variable (s) of interest
top of wich the entire system is build. availability*Maintainability: COTS or Custom*Complexity: requirement during the testing process can also be tracked
*Determine the primary dimensions of the variable of
Agile development principles computational steps to be carried out. Any given procedure e.g., domains, tools, test data easily through these test artifacts. Some of the deliverable
might be called at any point during a program's execution, interest *Determine the type and scale of variable’s primary
‫סקראם הוא תהליך אג'ילי שמאפשר לנו להתמקד‬ Using Oracles in Test Automation Automated verification test artifacts are presented below: Test strategy, test plan,
dimension and what values it can take *Determine whether
‫ערך עסקי גבוה בזמן הקצר ביותר בהפקה של‬ including by other procedures or itself. Traditional assumes that *we know what to check to know whether the test scenario, Test case, test reports.
procedural programming is far less dogmatic: you have data. you can order variable’s values (from the smallest to the
‫סקראם מאפשר לנו בזריזות ובמחזוריות לבחון את‬ software did what it was supposed to do.*We know the Verification Calls (VC) Definition: An external investigator
largest) *Determine whether this is an input variable or
)‫התוכנה במצב עובד (כל שבועיים עד חודש‬ You have functions. You apply the functions to the data. If correct answer entity that observes and documents selective states and
you want to organize your program somehow, that's your result *Determine how the program uses this variable
‫ הצוותים מנהלים‬.‫הגוף העסקי מספק סדרי עדיפויות‬ Testing levels &Types: Integrate test oracle with testing occurrences represented by data items during, and at the
‫את עצמם על מנת להבין כיצד לספק בצורה הטובה‬ *Determine whether other variables are related to this one.
problem, and the language really isn't going to help you. activities*Testing types : Functional testing (FT), Non end of, the TC execution, and signals their validity at a
‫ביותר את הדרישות בעלות העדיפות הגבוהה ביותר‬ Analyze the variable and create tests: *Partition the
Functional (NFT)*Testing levels: Unit, Integration, specific time purpose: Anticipates, controls and documents
variable (its primary dimension) *if the dimension is ordered,
certain behaviors of a TC execution Requirements: * Access volume specifications during the gathering phase *Testing . ‫ תקשורת טובה בתוך צוות הפרויקט‬,‫רק על ידי מנהל הפרויקט‬ outside world? What is my app’s main use case? Monitor the relevant application. * Lots of issues will be reported. *
to all types of storages and data items * Ability to deal with teams do not have access to the data sources *Delay in ‫ אין ניצול מלא של‬,‫חסרונות קשה לפתח מומחיות מקצועית‬ the mobile device and software market. Know when new Crowd testing is best suited to user-centric software
Zipped/Encrypted data * Application of runtime/dynamic giving production data access to the testers by developers ‫ חוסר‬, ‫ אין אופק תכנון מעבר לסיום הפרויקט‬,‫משאבים מקצועיים‬ phones will be rolled out. Find out about the new features applications. Con’s *Crowd testers are not generally testing
parameters * External to the tested application * *Production environment data may be not fully usable for .‫יציבות בארגון עקב מעבר אנשים בין פרויקטים‬ of the operating systems. Keep an eye on your target group experts. *The identity of the testers is sometimes unknown
Operational as API call * Ability to interface with reporting testing based on the developed business scenarios *Large ‫מבנה ארגוני מטריציוני מבנה ארגוני המשלב בין המבנה‬
to see if new devices are showing up in your statistics. Think *Bug reports may be of low quality. *Access to staging
system * Fast / Efficient * Can deal with complex logic and volumes of data may need in short period of given time ‫ כל פרויקט מנוהל על ידי מנהל‬.‫הפונקציונאלי למבנה הפרויקטאלי‬
‫ המקבל את המשאבים המקצועיים ממנהלי היחידות‬,‫פרויקט‬ twice before updating a phone to the latest operating systems can be very difficult due to legal hurdles, data
data structures *Data dependencies/combinations to test some of the
,‫ ניצול יעיל של משאבי הארגון‬,‫ יתרונות מקצועיות‬.‫המקצועיות‬ system version. privacy and security. * Crowd testing can take a long time to
Traditional Testing levels unit < component< subsystem< business scenarios *The testers spend more time than
‫התאמה לסביבה דינאמית שבה צוות מצומצם בכל תחום משרת‬ Consideration while developing and Testing Mobile apps prepare.
system * required for communicating with architects, database ‫ דורש תיאום בין‬,‫ חסרונות מורכב לניהול‬.‫את כל הפרויקטים‬ Gather information about your possible target group. *Ask
System = ∑ 𝑆𝑢𝑏𝑠𝑦𝑠𝑡𝑒𝑚𝑠 {∑ 𝑐𝑜𝑚𝑝𝑜𝑛𝑒𝑛𝑡𝑒𝑠 [∑ 𝑢𝑛𝑖𝑡𝑠]} administrators and BAs for gathering data *Mostly the data ‫ תקשורת‬, ‫ אנשי צוות הפרויקט כפופים לשני מנהלים‬,‫גורמים רבים‬ the customers about their needs. What are the app needs to
Differentiate testing types into dimension levels is created or prepared during the execution of the test ‫ למנהל הפרויקט קשה לפקח‬, ‫מורכבת בין גורמים מתחומים שונים‬ :‫טבלת החלטות‬
1.Technical / Code levels – signify the actual code *Multiple applications and data versions *Continuous solve a problem for the user. Usability is really important.
‫על האיכות המקצועית של הביצוע‬
development & implementation during the SDLC progress release cycles across several applications *Legislation to look The app needs to be reliable and robust. App performance is
2.Functional / Business Levels signify the usage of the after Personal Identification Information (PII) crucial. Apps need to be beautiful. Mobility and Networks
application from Separate microservices conglomerates into Strategies for Test Data Preparation creation of flat files Mobile Apps Types
services that must “talk” to one another and should be based on the mapping rules-> data from production Native Apps Developed for use on a particular platform or
applied within the usage context 3.Nonfunctional levels are environment ->retrieving sql queries that extract data from device. Coded in a specific programming language, such as
all the extra’s which can be described with the question of client’s existing data bases -> automated test data Objective C for OS and Java for Android. Provide fast
how? and how well? it is being executed generation tools ->loop. Test Data Generation activities: performance and a high degree of reliability. Have access to
DEFECT Anomaly: Any condition that deviates from data analysis, data matching, test data update, sensitive a phone’s various devices, such as its camera and address
expectation based on requirements specifications, design data, automation, warehouse.
book. Users can use some apps without an Internet
documents, user documents, standards, etc. or from basic needs for test management requirement management
connection. Web Apps Stored on a remote server and
someone’s perception or experience. Anomalies may be tool, configuration management tool, test execution tool,
delivered over the internet through o browser. Are not real
found during, but not limited to, reviewing, testing, analysis, incident management tool – all of these are test
compilation, or use of software products or applicable management tool. apps; they are really websites that, in any ways, look and
documentation. [IEEE 1044] See also bug, defect, deviation, continuous innovation (CI) continuous integration feel like native apps. They are run by a browser Hybrid Apps
error, fault, failure, incident, problem.Defect: A flaw in a Continuous Integration is a software development practice Are like native apps, run on the device, and are written with
component or system that can cause the component or where members of a team integrate their work frequently, web technologies. Hybrid apps run inside a native container,
system to fail to perform its required function, e.g. an usually each person integrates at least daily - leading to and leverage the device’s browser engine (but not the
incorrect statement or data definition. A defect, if multiple integrations per day. Each integration is verified by Agile - > CI Acceptance test driven development process. browser) to render the HTML and process the JavaScript
encountered during execution, may cause a failure of the an automated build (including test) to detect integration Tight collaboration between business and delivery teams.
locally. A web-to-native abstraction layer enables access to
component or system. Other related definitions: defect errors as quickly as possible. Many teams find that this Cross-functional teams include QA and operations.
device capabilities that are not accessible in Mobile. Web
based technique:, defect density: Defect Detection approach leads to significantly reduced integration problems Automated build, testing, DB migration and deployment.
Incremental development on mainline with continuous applications, such as the accelerometer camera and local
Percentage (DDP): defect management tool: defect masking: and allows a team to develop cohesive software more
integration. Software always production ready. Releases tied storage.
defect report: defect taxonomy: defect tracking tool: rapidly
to business needs, not operational constraints. Mobile Apps Types NATIVE – PROS & CONS
Software Errors : What Kind of Error? Coding Error: The continuous deployment (CD) means that every change goes
‫ להבין מה מצפים‬:‫העצמת עובדים – עקרונות מה צריך עובד‬ Pro’s Due to all of the app’s elements being included in a
program doesn’t do what the programmer would expect it through the pipeline and automatically gets put into
‫ לכל‬,‫ממנו בכל שלב* יעדים ברורים להשגה (בתקופת זמן מוגדרת‬ single native package, native applications tend to have fast
to do. Design Issue: It’s doing what the programmer production, resulting in many production deployments every
‫ *מה האפשרויות להתקדמות ומה צריך לעשות בשביל זה‬.)‫תפקיד‬ graphics with fluid animations built in. Native applications
intended, but a reasonable customer would be confused or day. Continuous Delivery just means that you are able to do
‫ מה רוצה‬.)‫*איפה הוא עומד היום (איך רואים אותו מנהלים‬ can access exclusive ative APIs in the phone’s operating
unhappy with it. Requirements Issue: The program is well frequent deployments but may choose not to do it, usually ‫המעביד‬ DevOps: Developers vs Operation: Developers: Need to
designed and well implemented, but it won’t meet one of due to businesses preferring a slower rate of deployment. system such as push notifications, camera, and in-app
‫להכיר את עובדיו *לברור את הטובים מהם על מנת לקדמם‬ wait weeks in order for there work to be placed in
the customer’s requirements. Documentation / Code benefits of continuous delivery are: * Reduced Deployment ‫(עתודות העתיד של הארגון) *לברור את הפחות טובים על מנת‬ purchases, which would otherwise be prohibited, or
Mismatch: Report this to the programmer (via a bug report) Risk: *Believable Progress *User Feedback. Develop- provided in a cumbersome manner on a mobile web production. *Hard to manage the code that are pending in
‫ יחסי‬:‫לשחררם *להשיג מהם תפוקות ברורות מה כולם רוצים‬
and to the writer (usually via a memo or a comment on the >Automated build->deploy to dev server ->automated .‫ לקדם את החברה‬,‫ תוצרת ויעילות‬,‫עבודה טובים‬ application. If you’re developing a native application for production and developing new features. Operations: Code
manuscript). Specification / Code Mismatch: Sometimes testing -> deploy to QA -> Manual/performance testing -> ‫ מאוד קשה‬:‫מה עושה המעבר למודל אג'ילי למסלולי קידום‬ iPhone, they have many resources, development tools, and that's works on the dev env sometimes not working on the
the spec is right; sometimes the code is right and the spec deploy to Prod. ‫לממש עבודה באג'ייל ללא מבנה מטריציוני *יש קושי בקידום‬ reading material to help you out. Con’s If you intend to also production env * Responsible for checking that the code
should be changed. Traditional to Agile Transfer: Step 1: accept agile thinking. ‫מקצועי (רמת הידע והמקצועיות של העובד היחיד) * מאוד קשה‬ publish your app to a different app store, your application that been deploy doesn’t cause any errors to the production
Defects measurement dimension A software bug arises Enterprise Stakeholders: Product Manager, Program ‫לזהות כישורי ניהול אמתיים בתוך קבוצת העבודה – ניגוד עניינים‬ system.
will need to be rewritten in order to be a native app on
when the expected result don't match with the actual Manager, CEO, CTO, IT Director/VP. Agile Product ‫* מודל הניהול החדש עובד בעיקר בקבוצות קטנות ואחידות‬
‫*למרות האחריות הכוללת של הצוות בזמן הפרויקט – האחריות‬ another mobile OS. This usually delays features for the next
results. It can also be error, flaw, failure, or fault in a Stakeholders: Product Owner. Product backlog -> sprint
‫על קידום העובד מוטלת בעיקר על עצמו‬ platform in development. It is a time consuming process to ‫ שבה משולבים בתהליכי‬,‫ זוהי גישה לפיתוח תוכנה‬:‫הגדרה‬
computer program. Most bugs arise from bad requirements. backlog->(scrum master, scrum team) ->deployable feature.
Following are the common types of defects that occur Ops, PS&M, infra team - ??? Step 2: adapt continues ‫מודל ניהול מודרני – בואו נשטח את ההיררכיה *להתאים את‬ create a native app for both iOS and Android, as well as ‫ בודקי האיכות וכן האנשים‬,‫ המפתחים‬,‫ מזמיני הפרויקט‬,‫הפיתוח‬
during development: Arithmetic Defects, Logical Defects, pipeline. ‫המבנה האירגוני והמסלולים האישיים לעבודה בשיטות אג'יליות‬ money consuming. Native platforms define their own rules ‫ שילוב פעולה הדוק‬.‫שיטמיעו את התוכנה לאחר מכן בארגון‬
Syntax Defects, Multithreading Defects, Interface Defects, Generalization and trends The move to CI Project is a large ‫*הקשיחות המבנית של הדרגות בארגון פוגעת ומסרסת‬ and frameworks and inherit little from other disciplines, ‫ מאפשר לקצר באופן משמעותי‬,‫ומתמשך לאורך כל מהלך הפיתו ח‬
Performance Defects organizational project which requires management support ‫**לאפשר שכל אחד יוכל לבטא עצמו גם ניהולית וגם מקצועית‬ requiring more investment. Native applications typically ‫ לוודא כי התוצר אכן עונה במדויק‬,‫את זמן הפיתוח של היישומים‬
Defects Root cause and timing Root Cause Analysis is like a *Automation is a must - with out it no way to perform *Unit ,‫ תפקידים‬:‫*לשמר ולטפח עובדים מצטיינים * מעבר גמיש בין‬ require you to define phones and tablets separately, or ‫ להבטיח שמדובר בתוצר אמין וכן כזה שיהיה‬,‫לדרישות המזמין‬
chain of events which go backward, right from the last testing level are playing important part for achieving the ‫ מיקום גאוגרפי‬,‫ אופי עבודה‬,‫מקצועות‬ ‫ מאחר ומדובר בגישה שמתבססת‬.‫קל להטמיעו ולשלבו במהירות‬
define individual layouts. While this step is available and
Cognitive processes Cognition is an interdisciplinary topic
possible action to the previous and so on, till you reach the desired quality.* Even thou the desired formation of the repeated for web apps ‫ היא מאפשרת גמישות רבה וכן‬,‫גם על אוטומציה ופתרונות בענן‬
start of the problem and the exact point at which it was needed infrastructure could be specify. Hardly any single and its study is generally found within “cognitive science” *A
Mobile Apps Types hybrid: Pro’s: Hybrid mobile apps don’t . ‫חיסכון כספי‬
mental process that is applied to build and use knowledge
introduced as a defect. Possible Factors responsible for tool provides the full solution. Each company assemble its have that “mobile web” browser look because they can
defect occurrence: Inadequate Requirements, Lack of Unit own tool selection and integrate all of them. * The *Software testing is a cognitive process of analysis, research
and decision making *Developers claim to write unit tests include native hardware features.*The content of a hybrid Agile: Addresses the gap between customer requirements
testing, Communication gap, Negligence, Incorrect responsibility for quality transferring to development teams
systematically and to measure code coverage *Human app is portable and just requires a native harness to run it. and development team. Devops: Address the gap between
Assumption, Design Gaps, Deployment issue, Inappropriate * Testing is becoming an activity done daily by programmer
decisions –oracle definition, design test ware, tools, *Developers have the option to package the app locally or development and operations
Environment, Ineffective Test case coverage, Ineffective Test * The measurement are not yet matures * The economic
data benefit of the different testing levels is yet to be formalized. equivalent testing, parameters… *Developing testing through a server, which provides access both online and
A Bug Workflow *You find a bug, investigate it, and report * The industry galloping forward without the academic competence with cognitively complex skills requires mastery offline. Con’s: Automatic generation may not work on all
of routine tasks *Selection testing tool- what to test, devices, which can get especially complicated when trying to ‫ שיטת פיתוח‬-‫אופס הוא חלק חשוב במתודולוגית אג'ייל‬-‫דב‬
it. *A programmer looks into it and fixes it, or decides that a support. *It is hard to see a theoretic justification and
fix will take so long - it needs management approval, support for the new trends in academic literature. sampling tool *Design tool- how to test, what to include in accommodate to different Android phones. *Several ‫ שינוי המקנים יתרון‬-‫מואץ שמאפשרת תהליכים מהירים ברי‬
recommends that it be deferred (fix it later), or determines Conclusion and next step: * The CI / CD trends are affecting the test vendors have started offering build-platforms for hybrid ‫ עם יכולת לשינויים בדרישות על פי‬,‫משמעותי בתחרות הלקוח‬
(argues) it is not a bug. *The project manager prioritizes the testing levels. Eliminating the boundaries between the Human factors and testing A hunting metaphor for human ,‫דחיפות בזמן הפיתוח ואינטגרציות מהירות ותמידיות על כל שינוי‬
frameworks, simplifying the build knowledge that was
factors in testing (Prado, 2015) *The goal is to reveal failures ‫כאשר המטרה המרכזית הינה אספקה של תוכנה עובדת באופן‬
unfixed bugs and may reassign them. *The project team deferent levels and force a single unit to perform all of them previously required for multi-platform. Just be prepared to
(with representatives of the key stakeholder groups) reviews *Unit test is becoming one of the milestones for achieving it by transforming the testing problem into feasible automatic ‫ כדי לממש את השינויים‬.‫ עם שאיפה ללוחות זמנים קצרים‬,‫תדיר‬
pay for it. *If the App Store is able to recognize that
(or semi-automatic) solution *The difficulty in writing new ‫ פותחו‬,‫ תוך שמירה על שלמות המוצר בכל רגע‬,‫בקצב תדיר‬
deferred bugs and may reprioritize unfixed bugs. *The test *key success factor is the ability to automate all previously application is not truly native, it may be denied from the App
group retests bugs marked as fixed, deferred, or manual activities *Agile development teams are required to tests correspond to (Daka and Fraser, 2014) *the ‫שיטות עבודה וכלים חדשניים שתומכים בתהליכי‬Continues
identification of which code to test and *the isolation of the Store. *If your app can’t be published on the App Store, then
irreproducible and closes them, or adds new information to perform new activities that were done by dedicated teams
unit under test *Difficulty in the evaluation of the unit that would reduce your monetization and distribution
them and ask that they be reworked or reconsidered. It in the past. * New testing consideration should be addressed
testing (Runeson, 2006) *functional approaches rely on potential since purchase price or in-app purchases are native ‫ניהול סיכונים – דוגמאות לסיכון בפרויקט המערכת לא עוברת‬
turns some of these into appeals to the project team. *This by the teams such as – performance, security, deploy ability,
specifications to derive and evaluate the testing features. ‫את מבחני האינטגרציה * המתכנת הראשי עזב את הצוות * איחור‬
is a simplified description. Workflows vary significantly regression
requirements *functional testing is more prone to Mobile Apps Performance Testing definitions : Performance ‫בהספקת רכיב קריטי * שרפה באתר העבודה * תביעה משפטית‬
:‫ טיפול בשגויים‬:‫ דרגות חומרה‬Show stopper, Critical, The story of Atomation Atomation offers an IoT platform for
interpretation and subjectivity than other techniques *Was Testing is in a key factor in the to be or not be of the Mobile – ‫ ניהול סיכונים‬.‫על הצבת אנטנת שידור * המזמין איבד עניין‬
Medium, low. Urgency levels: Urgent, immediate, Soon, No making any object intelligent and connected. The platform
the hunter left out…?? App *In software engineering, performance testing is in ‫ מרכיב חדש גדול * ההיקף של חלק‬: ‫רמזים לקיום סיכונים‬
rush. Status: ne- new, in-investigate, as – assign, on – on consists of small, connectable hardware along with big data
hold, fi – fixed, re – retest, can – canceled, d – defer, cl – aggregation and analytics via the Cloud. *The hardware, an Cognitive framework Tools Interaction usability Problems -- general, a testing practice performed to determine how ‫מהעבודה (או כולה) לא לגמרי מוגדר * חלק מהדרישות יגיעו‬
closed. Permissions: tester: test, open; team leader: test, "atom," contains a circuit board with embedded code, which > usability and applicability of testing tools *Clarity, a system performs in terms of responsiveness and stability ‫במהלך ההתקדמות בפרוייקט * צוות חסר ניסיון * ריבוי שינויים‬
open, status change, close; developer - fix, change status; has its own power source, Bluetooth connectivity with other documentation, intuitiveness, simplicity, compatibility, ‫ תוכנית העבודה לא‬:‫ סיבות עיקריות לכישלון פרוייקט‬.‫בתכנון‬
under a particular workload. It can also serve to investigate,
manager – test, open, fix, close, change status; customer atoms and smart devices, and connectivity to the Cloud. feedback, GUI auxiliary or aesthetic (Norman, 1994) ‫ חלק מהפעילויות פשוט נשכחו‬,‫כללה את כל הפעילויות הנדרשות‬
measure, validate or verify other quality attributes of the
presentor – test, open; appellation commission – close, Atoms connect via a plug. Atomation also gathers big data *Cognitive framework- three dimensions to support unit ‫* שינויים משמעותיים בתכולה נוספו במהלך הפרוייקט * לא בוצע‬
system, such as scalability , reliability and resource usage.
change status; from atoms, performs analysis, and provides feedback to testing (Prado, 2015) *It is possible to classify cognition into ‫ניתוח סיכונים אשר היה משמש כלי לאיתור פערים בתכנית‬
experiential and reflective: 1) Experiential cognition is the *Performance testing can verify that a system meets the
customers through the Atomation dashboard. *Atomation's ‫העבודה * בפעילויות שנרשמו יש פער ממשי באמידת המשאבים‬
type of mental effort spent during automatic tasks such as specifications claimed by its manufacturer or vendor. The ‫ שינוי‬:‫ הקטנת ההסתברות‬:‫והזמן לפעילויות טיפול בסיכון‬
platform has important agricultural applications, including
talking, riding a bicycle, cooking the every-day meal etc. process can compare two or more devices or programs in ‫בתכולת העבודה (לא כל הביצים בסל אחד) * שימוש בטכנולוגיה‬
recommendations for optimal growth, atering and
2)Reflective cognition requires much more concentration terms of parameters such as speed, data transfer rate ‫ הכנת‬:‫ צמצום הנזק‬.‫חליפית * ביצוע ניסויים להקטנת אי וודאות‬
fertilization guidelines, future predictions, and farm
and is related to decision making, creativity, and reasoning ,bandwidth, throughput, efficiency or reliability ‫אמצעי מיגון לפני האירוע (תכנית מגירה) * הכנת עתודה‬
automation.
Internet Of Things simplification Internet of Things can be Knowledge management in software testing The aim of a *Performance testing can also be used as a diagnostic aid in ‫) *העברת ה סיכון לגורם אחר‬...‫ כסף‬,‫ זמן עודף‬,‫(משאבים‬
KM program is to enable individuals to solve problems more locating communications bottlenecks. Often a system will ‫ *העברת המשימה לגורם אחר אשר אצלו הסיכון נמוך‬.)‫(ביטוח‬
realized in three paradigms – internet-oriented
(middleware), things oriented (sensors) semantic-oriented efficiently (Andrade et al., 2013) *transfer knowledge work much better if a problem is resolved at a single point or ‫ פעולה מונעת – פעילות שיש לבצע‬:‫ דרכי התמודדות‬.‫יותר‬
(knowledge). Although this type of delineation is required existing at the individual level to the organizational level *to in a single component ‫ מטרתה להפחית את‬,‫ בד"כ במועד מוקדם ככל האפשר‬,‫מראש‬
due to the interdisciplinary nature of the subject, the explicit the knowledge gained from experiences *Lessons ‫ פעולת‬.)‫הסיכוי להתממשות הסיכון (הקטנת סבירות הסיכון‬
usefulness of IoT can be unleashed only in an application learned (LL) are KN enablers. ‫שיכוך – פעולה שיש לבצע בעת התרחשות הסיכון ומטרתה‬
Mobile Apps Performance Testing challenges Prevalence of
domain where the three paradigms intersect The promise – Human skills in testing Creativity *Intelligence *The ability ‫להפחית את הנזק שיגרם בעת התרחשות הסיכון (הקטנת חומרת‬
customer protocols Rapid scalability *Lack of mobile
will it be? Abstract away the specificities of physical devices to learn and adapt to new situations *The ability to ‫ פעולה מתקנת – פעילות שיש לבצע אחרי שהסיכון‬.)‫הנזק‬
efficiently recognize problems *Most new bugs are found by monitoring solutions *Lack of diagnostic tools *Time to
from applications and/or users *Key elements to promote – ‫ העברת סיכון‬.‫התממש במטרה להחזיר את המצב לקדמותו‬
humans testing the software *Exploration can bring direct market *Selection of load testing tool *Test environment
interoperability and seamless integration of physical devices ‫ קבלת‬.‫העברת אחריות על הסיכון ועל הטיפול בו לגורם אחר‬
utilization of knowledge and learning (Itkonen,2016) *High concurrency
*Contribute to make the development of IoT applications .‫ לקיחת "סיכון מחושב" ולא ל נקוט בשום פעולה‬- ‫סיכון‬
UTS – Design And Perspective Entity Model – Best practice Exploration testing (ET) Exploratory testing is neither black Mobile Apps Performance Testing strategies Test in
easier *Recent research field that has drawn attention from ‫ נקבע ונתעד‬,‫ בתהליך זיהוי הסיכונים נאתר‬:‫זיהוי סיכונים‬/‫איתור‬
• ‫תמיכה מוחלטת של ההנהלה‬ nor white, but rather a continuum of exploration exists Production environment *Test for 2-3 times expected
• ‫תשתיות אוטומציה מוטמעות בארגון‬ industry and academia - I doubt that ‫ לצורך הזיהוי ניתן‬.‫אילו סיכונים עלולים להשפיע על הפרויקט‬
Future based around microservices Same say the future will *Based on the personal freedom, and the experience and capacity *Test accurately for the mobile environment *Test ‫ טכניקות‬,‫ הנחות‬,‫ חוות דעת מומחים‬:‫להיעזר בכלים כגון‬
• ‫צוותי עבודה בין תחומיים‬ skills of the tester, there is a lack of test charter design for ET
be based around microservices. These are discrete pockets across geographies *Test with Real time results ‫ * התוצר של תהליך זה הוא רשימת‬.‫ ניתוח הנחות ועוד‬,‫דיאגרמות‬
• ‫מערכת דיווח כלל אירגונית‬
of functionality, but carry out single functions, rather than *Simultaneous learning, test design and test execution *Five Mobile Apps AUTOMATION Automate the business critical ‫ * יש לחפש סיכונים על בסיס תוכנית העבודה‬.‫סיכונים‬/ WBS /
• ‫מעבדה להרצה ממכונת של התסריטים‬
levels exist: 1.Fully scripted: the tester is provided with the parts.*Automate user workflows and scenarios. *Automate
• ‫שנוי כוח אדם ומקצועי‬ trying to take on too much. As a process is broken down into : ‫ הלקוח‬.‫ מערכות ותהליכים עסקיים‬/ ‫גאנט ונתיב קריטי‬
its constituent parts, you are left with a set of tasks – and it test steps, but also with the test data, which does not only complex app scenarios. * Automate sequences that
• ‫מעורבות הלקוח (פנימי וחיצוני‬ ‫ העם‬:‫פרמטרים המסעיים בזיהוי סיכונים גנריים הקשורים ללקוח‬
• ‫תהליך הטמעה מסודר לכל הארגון‬ is surprising how often these tasks appear in other provide room for exploration steps. 2.Low degree of needs to be repeated several times. * Automate only the * ?‫עבדנו עם הלקוח בעבר? * האם הלקוח יודע מה הוא רוצה‬
Test Plan The purpose of the Test Plan is to outline and processes. *Functions, Also, should a function need exploration: Besides the information for medium degree of
acceptance criteria. * Support the development team in ‫האם הלקוח מוכן להקדיש זמן משמעותי לתהליך איסוף‬
communicate the intent of the testing effort for a given updating, this can be done in one place, and all dependent exploration the tester is also required to follow certain test
providing fast feedback. * Build up a regression test suite. * ‫הדרישות? *האם הלקוחות מוכנים להקים קשרי תקשורת מהירים‬
schedule. Primarily, as with other planning documents, the processes get the update automatically. *As a process is steps, which farther may bias the tester and reduce the
Automate only if it is economic. ‫עם המפתח? * האם לקוח מוכן להשתתף בביקורת? *האם‬
main objective is to gain the acceptance and approval of the broken down into its constituent parts, you are left with a exploration space. The tester is encouraged to choose the
Criteria FOR TEST AUTOMATION TOOLS Does the tool ‫הלקוח מיומן ומבין אילוצים טכניים? *האם הלקוח מבין את תהליך‬
stakeholders in the test effort. As such, the document set of tasks – and it is surprising how often these tasks test data to be used in the test steps. 3.Medium degree of
support different mobile app types(native, hybrid, web ,‫ דרגת חומרה‬,‫ (כשל צפוי‬:‫פיתוח תוכנה? טבלת ניהול סיכונים‬
should avoid detail that would not be understood, or would appear in other processes *Dynamic creation of an overall exploration the tester is provided with one ore more high
goals for the test session. At the same time additional apps)? *Which mobile platforms are supported (Android, )‫ תאור‬,‫ גורם מטפל‬,‫ דרגת דחיפות‬,‫סיכוי להתרחשות‬
be considered irrelevant by the stakeholders in the test microservice-based approach to process requires complex
effort. *Secondly, the test plan forms the framework within restrictions are required, that may bias and thus limit the iOS, Windows Phone, BlackBerry)? *Which recognition
orchestration
which the test roles work for the given schedule. It directs, tester in his/her testing session. Biasing aspects can be too technology does the tool use (native, image, text,
Testing microservices Testing in microservices architecture
guides and constrains the test effort, focusing the work on can be more challenging than in traditional, monolithic detailed goals, priorities, risks that the tester is required to coordinate)? *Does the tool change the app you want to test
the useful and necessary deliverables focus on, tools used, the functionality that needs to be (e.g.by adding a server, instrumentation)? *Is the tool able
architecture. In combination with continuous integration
Who should be involved in planning? (stakeholders) and deployment, it’s even more complex. It’s important to covered, or the test method to be used. 4.High degree of to execute the tests on real devices as well as on emulators
Principals: These are the persons who buy our software End exploration The tester provided with one ore more high
understand layers of tests and how they differ from each and simulators?*Can the test suite be executed on several
Users: The group of people who work with the software other. Putting effort on automation aspect of your tests, goals for test session, also knowing the test object. Besides
devices at the same time? *Does the tool support all the UI
system Partners: These are the group of people who will that, the tester can freely explore the system. 5.Freestyle
doesn’t mean you shouldn’t drop on manual testing. Only and control elements of the mobile platform *Mobile Test
make the software work in production environment combination of various test approaches gives you Only the test object is provided to the tester. The tester can
Automation and Tools 189 *Is the tool able to awake the
Insiders: These are people inside the organisation knowing confidence in product quality ***The Netflix development freely explore the system.
Session-based test management (SBTM) Is an enhancement device from the sleep or standby mode? * Are all gestures
how the team is working. The test plan can consist of any team established several best practices for designing and
of Exploratory testing *Reflects the concept of time boxing supported like swipe, scroll, click, tapor pinch-to-zoom? *Is
and all of the following: Test assignment, Test strategy, implementing a microservices architecture. * Create a
*Test results should be reported in a consistent and the tool able to simulate native buttons like the back or
Test approach, Test organization, Test estimates, Test Separate Data Store for Each Microservice *Keep Code at a
accountable way *No standard test charter framework is home button? *Does the tool use the device’s soft keyboard
planning, Test resourcing, Test environment, Similar Level of Maturity *Do a Separate Build for Each
requirements/design, Test tools...and more Manage and Microservice *Deploy in Containers *Treat Servers as being used by practitioners in the testing field to enter data?*Can the app be tested in several
control the test plan: Test monitoring activity includes: Stateless Docker Containers Ver. Virtual machines Test charted design The charter is a test plan which is languages?*Does the tool support a programming language
Providing feed back to the team and the other required usually generated from a test strategy *Identifying the major with which you are able to write test scripts?*Can the tool
Containers and virtual machines have similar resource
stakeholders about the progress of the testing efforts. isolation and allocation benefits, but function differently test planning challenges, including the risks *Clarifying the be integrated into your development environment (IDE)?
*Broadcasting the results of testing to associated mission of the test *Product analysis *Products risks *Test *Can the tool be integrated into a continuous integration
because containers virtualize the operating system instead
members.*Finding and tracking the test metrics. *Planning of hardware, containers are more portable and efficient. strategy design points to the approach used *Requirements system? *Can the tool be combined with other tools like a
and estimation and deciding the future course of action *Implicit Expectation: *Existing artifacts (e.g., source code,
Containers include the application and all of its defect management or test management tool? *Is the tool
based on the metrics calculated. dependencies --but share the kernel with other containers, defects database, design documents)
able to connect to a test cloud provider in order to execute
Test Automation Artifacts: Tools & Infrastructure Test charter quality criteria Usefulness: To check if the test
running as isolated processes in user space on the host the tests within in a cloud? *Is the tool well documented .
Repository Management *Automation Tools / Engines / operating system Virtual machines include the application, plan meets the intended functions Accuracy: Checks for
accuracy with respect to any factual statements Efficiency: *Is the tool open source or closed source? *Is there a large
Vehicles*Impact on Defect Management Tool *Monitoring & the necessary binaries and libraries, and an entire guest
Checks if the available resources are used efficiently community/ support behind the tool?
Control the process and outcomes *Reporting & Analyzing operating system -- all of which can amount to tens of GBs.
Tools *…ALM – how it can fit here? *What is missing? *In Hierarchical Organization Often also called centralized Adaptability: Checks for the immunity to change and
your Org *How to connect all this to the world…? organization. Examples: Military, church, traditional unpredictability in the project Clarity: Checks for the self-
crowd testing process An emerging trend in software
Challenges in testing* Debugging: *Multithreads *Long businesses. Key property: The organization has a tree consistency and unambiguity of the test plan Usability:
testing which exploits the benefits, effectiveness, and
cyclic procedure to reach critical point *Debugging of OS (if structure. Decisions are made at the root and communicated Checks if the test plan document is concise, easily
maintainable and well organized Compliance: Checks if it efficiency of crowd sourcing and the cloud platform *It
crashes, so does the debugger) *Creating identical test to the leaf nodes. The decision association is also used for differs from traditional testing methods in that the testing is
environments for many testers *Complete restoration of reporting and communication. Advantages: *Centralized meets the requirements which are externally demanded
Foundation: Checks if it is an output of an effective test carried out by a number of different testers from different
machine state time travelling Ability to record the control over roject selection *One set of management and
planning process Feasibility: Checks if it is within the reach places, and not by hired consultants and professionals. *The
program’s run, Travel back and forth the lines of reporting rocedures for all project participants across all
code,Inspect variables values throughout time, Inspect projects *Established working relationships among people of the organization which performs it software is put to test under diverse realistic platforms
variables history values before certain point So what do Mobile App Testing issues Ensuring Multiple mobile apps which makes it more reliable, *In addition, crowd source
*Clearly established lines of authority to set priorities and
we get Ability to duplicate complicated system resolved conflicts *Clearly defined career path on different platforms *Common bugs issues *Sensors *E2E testing allows for remote usability testing because specific
environments to many testers *Ability to save a machine (or ‫מבנה ארגוני פונקציונאלי המבנה הקלאסי שבו כל יחידה‬ testing *In the lab or on the Wild *Manual , Automation or target groups can be recruited through the crowd. *This
many) state *Ability to restore to a stable starting point ‫ארגונית שייכת לתחום מקצועי ואחראית לביצוע כל הפעולות‬ both *Test Types – Functionality , UX and Performance and method of testing is considered when the software is more
*Ability to debug OS *Ability to inspect dynamic changes in ‫ הנדסה יתרונות פיתוח‬,‫ כספים‬,‫ שיווק‬, ‫ ייצור‬:‫ לדוגמא‬.‫בתחום זה‬ Exploratory, Accessibility, Maintainability *Check user-centric:
multithreads environments *Ability to time travel , ‫ תקשורת בהירה‬, ‫ שימוש יעיל במשאבים‬,‫מומחיות מקצועית‬ Workflows, Details and the Navigation *Security Testing
throughout the program’s run without the need reconstruct ‫ חסרונות‬.‫ ריכוז נושאים אדמיניסטרטיביים‬,‫מסלולי קידום ברורים‬ PRODUCT MANAGEMENT / PRODUCT OWNER
our steps *Reach the point of failure, and start travelling ‫ לכל יח' פונקציונאלית יש לו"ז‬,‫אין סמכות מרכזית לפרויקט‬ crowd testing pro’s and CON’S: Pro’s: Different testers from
PRELIMANARY PHASE Who is my user base? How old is the
back! ‫ קשה לתקשר ולשלב בין בעלי התפקידים‬, ‫ותקציב נפרדים‬ around the world, with differ-ent demographic backgrounds
‫מבנה ארגוני פרויקטלי מבנה ארגוני שבו כל יחידה עוסקת‬ average user? How many men or women are in my target
Some of the most common challenges of Test Data The and skill sets. *Lots of different mobile devices with different
‫ קיימות יחידות‬,‫ בנוסף‬.‫ מוצר או קבוצת מוצרים מוגדרת‬,‫בפרויקט‬ group? Which platform is used most among that user base?
teams may not have adequate test data generator tools hard-ware and software combinations can be used for
‫ יתרונות מנהל פרויקט אחראי‬.‫מטה המשותפות לכל הארגון‬ Which device is used most? Which software version is
knowledge and skills *Test data coverage is often testing. *The mobile app is tested in real world conditions
,‫ התמקדות על השגת מטרות הפרויקט‬, ‫לתכנון וביצוע הפ רויקט‬ installed on most of the phones? What kind of sensors does
incomplete *Less clarity in data requirements covering with real users. *The crowd provides a fresh set of eyes for
‫ ההתקשרות עם הלקוח מתבצעת‬, ‫צוות הפרויקט מחויב לפרויקט‬ my app use? How does the app communicate with the

You might also like