

Data Mining Lab Record

JNTU WORLD [www.jntuworld.com]

Syllabus
Credit Risk Assessment

Description: The business of banks is making loans. Assessing the credit worthiness of an applicant is of crucial importance. You have to develop a system to help a loan officer decide whether the credit of a customer is good or bad. A bank's business rules regarding loans must balance two opposing factors. On the one hand, a bank wants to make as many loans as possible, since interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make too many bad loans; too many bad loans could lead to the collapse of the bank. The bank's loan policy must therefore involve a compromise: not too strict and not too lenient.

To do the assignment, you first and foremost need some knowledge about the world of credit. You can acquire such knowledge in a number of ways:
1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try to represent her knowledge in a number of ways.
2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on finance. Translate this knowledge from text form to production rule form.
3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which can be used to judge the credit worthiness of a loan applicant.
4. Case histories: Find records of actual cases where competent loan officers correctly judged when, and when not, to approve a loan application.
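Approach 3 ("common sense") can be tried out directly. The sketch below encodes a few hand-written loan rules as an ordered list of production rules in Python. All attribute names and thresholds here are made up for illustration; they are not the actual German Credit attributes.

```python
# A sketch of the "common sense" approach: hand-written production rules
# for judging credit worthiness. Attribute names (existing_defaults,
# yearly_income, ...) are hypothetical, chosen only for illustration.
def assess(applicant):
    """Return 'good' or 'bad' by applying rules in order; first match wins."""
    rules = [
        (lambda a: a["existing_defaults"] > 0,                  "bad"),
        (lambda a: a["loan_amount"] > 10 * a["yearly_income"],  "bad"),
        (lambda a: a["years_employed"] >= 4,                    "good"),
        (lambda a: a["owns_property"],                          "good"),
    ]
    for condition, verdict in rules:
        if condition(applicant):
            return verdict
    return "bad"  # default: be conservative when no rule fires
```

A rule base like this is exactly what step 2 ("production rule form") would produce from a training manual; the assignment then asks whether a learned decision tree captures the same kind of knowledge.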

The German Credit Data


Actual historical credit data is not always easy to come by because of confidentiality rules. Here is one such data set, consisting of 1000 actual cases collected in Germany. In spite of the fact that the data is German, you should probably make use of it for this assignment (unless you really can consult a real loan officer!). There are 20 attributes used in judging a loan applicant (i.e., 7 numerical attributes and 13 categorical or nominal attributes). The goal is to classify the applicant into one of two categories: good or bad.

Subtasks:
1. List all the categorical (or nominal) attributes and the real-valued attributes separately.
2. What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes.
3. One type of model that you can create is a decision tree. Train a decision tree using the complete data set as the training data. Report the model obtained after training.
4. Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each of the examples in the dataset. What % of examples can you classify correctly? (This is also called testing on the training set.) Why do you think you cannot get 100% training accuracy?

Bharath Annamaneni [JNTU WORLD]


5. Is testing on the training set as you did above a good idea? Why or why not?
6. One approach for solving the problem encountered in the previous question is using cross-validation. Describe briefly what cross-validation is. Train a decision tree again using cross-validation and report your results. Does accuracy increase/decrease? Why?
7. Check to see if the data shows a bias against foreign workers or personal status. One way to do this is to remove these attributes from the data set and see if the decision tree created in those cases is significantly different from the full-dataset case which you have already done. Did removing these attributes have any significant effect? Discuss.
8. Another question might be: do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You had removed two attributes in problem 7; remember to reload the ARFF data file to get all the attributes initially before you start selecting the ones you want.)
9. Sometimes, the cost of rejecting an applicant who actually has good credit might be higher than accepting an applicant who has bad credit. Instead of counting the misclassifications equally in both cases, give a higher cost to the first case (say cost 5) and a lower cost to the second case, by using a cost matrix in Weka. Train your decision tree and report the decision tree and cross-validation results. Are they significantly different from the results obtained in problem 6?
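Questions 6 and 9 both rely on cross-validation. The idea behind k-fold cross-validation can be sketched in plain Python (a toy illustration of the procedure, not Weka's implementation): the data is split into k folds, each fold is held out once for testing while a model is trained on the remaining k-1 folds, and the per-fold results are pooled into one accuracy estimate.

```python
import random

def cross_val_accuracy(instances, labels, train_fn, k=10, seed=1):
    """Estimate accuracy by k-fold cross-validation: each fold is held
    out once for testing while a model is trained on the other k-1 folds."""
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k roughly equal folds
    correct = 0
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in idx if i not in held_out]
        model = train_fn([instances[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct += sum(model(instances[i]) == labels[i] for i in fold)
    return correct / len(instances)
```

Because every instance is tested exactly once by a model that never saw it during training, this estimate is less optimistic than testing on the training set.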

10. Do you think it is a good idea to prefer simple decision trees instead of long, complex decision trees? How does the complexity of a decision tree relate to the bias of the model?

11. You can make your decision trees simpler by pruning the nodes. One approach is to use reduced error pruning. Explain this idea briefly. Try reduced error pruning for training your decision trees using cross-validation, and report the decision trees you obtain. Also report your accuracy using the pruned model. Does your accuracy increase?
12. How can you convert a decision tree into if-then-else rules? Make up your own small decision tree consisting of 2-3 levels and convert it into a set of rules. There also exist classifiers that output the model directly in the form of rules; one such classifier in Weka is rules.PART. Train this model and report the set of rules obtained. Sometimes just one attribute can be good enough in making the decision, yes, just one! Can you predict what attribute that might be in this data set? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR.
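For question 12, flattening a small decision tree into if-then rules is mechanical: every root-to-leaf path becomes one rule. The sketch below shows this on a made-up 2-level tree (the tree structure and attribute values are hypothetical, not the tree J48 would actually produce).

```python
# Sketch: convert a small decision tree into if-then rules.
# The tree is a nested structure: ("attribute", {value: subtree-or-label}).
def tree_to_rules(tree, conditions=()):
    """Return one rule per leaf: each root-to-leaf path becomes
    'IF cond AND ... THEN class'."""
    if isinstance(tree, str):                       # leaf: a class label
        body = " AND ".join(conditions) or "TRUE"
        return [f"IF {body} THEN {tree}"]
    attribute, branches = tree
    rules = []
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + (f"{attribute}={value}",))
    return rules

# A made-up 2-level credit tree (attribute names are hypothetical):
tree = ("checking_status",
        {"<0": "bad",
         "0<=X<200": ("employment", {"<1yr": "bad", ">=1yr": "good"}),
         "no account": "good"})
```

Rule learners such as PART effectively produce this kind of rule list directly, without building the full tree first.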


1. Weka Introduction

Weka was created by researchers at the University of Waikato in Hamilton, New Zealand (Alex Seewald wrote the original command-line primer and David Scuse the original Experimenter tutorial). It is a Java-based application: a collection of open-source machine learning algorithms. The routines (functions) are implemented as classes and logically arranged in packages. It comes with an extensive GUI interface, and Weka routines can also be used standalone via the command-line interface.

The Graphical User Interface


The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for launching Weka's main GUI applications and supporting tools. If one prefers an MDI (multiple document interface) appearance, this is provided by an alternative launcher called Main (class weka.gui.Main). The GUI Chooser consists of four buttons (one for each of the four major Weka applications) and four menus.

The buttons can be used to start the following applications:
Explorer: An environment for exploring data with WEKA (the rest of this documentation deals with this application in more detail).
Experimenter: An environment for performing experiments and conducting statistical tests between learning schemes.


Knowledge Flow: This environment supports essentially the same functions as the Explorer but with a drag-and-drop interface. One advantage is that it supports incremental learning.
SimpleCLI: Provides a simple command-line interface that allows direct execution of WEKA commands for operating systems that do not provide their own command-line interface.

I. Explorer: The Graphical User Interface


1.1 Section Tabs

At the very top of the window, just below the title bar, is a row of tabs. When the Explorer is first started only the first tab is active; the others are greyed out. This is because it is necessary to open (and potentially pre-process) a data set before starting to explore the data. The tabs are as follows:
1. Preprocess. Choose and modify the data being acted on.
2. Classify. Train and test learning schemes that classify or perform regression.
3. Cluster. Learn clusters for the data.
4. Associate. Learn association rules for the data.
5. Select attributes. Select the most relevant attributes in the data.
6. Visualize. View an interactive 2D plot of the data.
Once the tabs are active, clicking on them flicks between different screens, on which the respective actions can be performed. The bottom area of the window (including the status box, the log button, and the Weka bird) stays visible regardless of which section you are in. The Explorer can easily be extended with custom tabs; the Wiki article "Adding tabs in the Explorer" [7] explains this in detail.

II. Experimenter
2.1 Introduction

The Weka Experiment Environment enables the user to create, run, modify, and analyse experiments in a more convenient manner than is possible when processing the schemes individually. For example, the user can create an experiment that runs several schemes against a series of datasets and then analyse the results to determine if one of the schemes is (statistically) better than the other schemes.


The Experiment Environment can be run from the command line using the Simple CLI. For example, the following commands could be typed into the CLI to run the OneR scheme on the Iris dataset using a basic train-and-test process. (Note that the commands would be typed on one line into the CLI.) While commands can be typed directly into the CLI, this technique is not particularly convenient and the experiments are not easy to modify. The Experimenter comes in two flavours: a simple interface that provides most of the functionality one needs for experiments, and an interface with full access to the Experimenter's capabilities. You can choose between those two with the Experiment Configuration Mode radio buttons: Simple and Advanced. Both setups allow you to set up standard experiments, which are run locally on a single machine, or remote experiments, which are distributed between several hosts. The distribution of experiments cuts down the time the experiments will take until completion, but on the other hand the setup takes more time. The next section covers the standard experiments (both simple and advanced), followed by the remote experiments and finally the analysis of the results.

III. Knowledge Flow


3.1 Introduction

The Knowledge Flow provides an alternative to the Explorer as a graphical front end to WEKA's core algorithms.


The Knowledge Flow presents a data-flow-inspired interface to WEKA. The user can select WEKA components from a palette, place them on a layout canvas and connect them together in order to form a knowledge flow for processing and analyzing data. At present, all of WEKA's classifiers, filters, clusterers, associators, loaders and savers are available in the Knowledge Flow, along with some extra tools.

The Knowledge Flow can handle data either incrementally or in batches (the Explorer handles batch data only). Of course, learning from data incrementally requires a classifier that can be updated on an instance-by-instance basis. Currently in WEKA there are ten classifiers that can handle data incrementally.
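What "updated on an instance-by-instance basis" means can be shown with a deliberately trivial sketch: a classifier that keeps running label counts and can absorb one new instance at a time, rather than requiring the whole training set up front. (This toy class is for illustration only; it is not one of WEKA's incremental classifiers.)

```python
from collections import Counter

class MajorityClass:
    """Trivial updateable classifier: predicts the most frequent label
    seen so far. Its state can be revised one instance at a time, which
    is the property incremental learning requires."""
    def __init__(self):
        self.counts = Counter()

    def update(self, instance, label):
        # Called once per newly arriving training instance.
        self.counts[label] += 1

    def predict(self, instance):
        return self.counts.most_common(1)[0][0]
```

A batch-only learner, by contrast, would have to retrain from scratch every time a new instance arrived.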

The Knowledge Flow offers the following features:
- intuitive data-flow-style layout
- process data in batches or incrementally
- process multiple batches or streams in parallel (each separate flow executes in its own thread)
- process multiple streams sequentially via a user-specified order of execution
- chain filters together


- view models produced by classifiers for each fold in a cross-validation
- visualize performance of incremental classifiers during processing (scrolling plots of classification accuracy, RMS error, predictions, etc.)
- plugin perspectives that add major new functionality (e.g. 3D data visualization, time series forecasting environment, etc.)

IV. Simple CLI


The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers, etc., but without the hassle of the CLASSPATH (it uses the one with which Weka was started). It offers a simple Weka shell with separated command-line and output areas.

4.1 Commands

The following commands are available in the Simple CLI:

java <classname> [<args>]
    invokes a java class with the given arguments (if any)
break


    stops the current thread, e.g., a running classifier, in a friendly manner
kill
    stops the current thread in an unfriendly fashion
cls
    clears the output area
capabilities <classname> [<args>]
    lists the capabilities of the specified class, e.g., for a classifier with its options:
    capabilities weka.classifiers.meta.Bagging -W weka.classifiers.trees.Id3
exit
    exits the Simple CLI
help [<command>]
    provides an overview of the available commands if invoked without a command name as argument, otherwise more help on the specified command

4.2 Invocation

In order to invoke a Weka class, one has only to prefix the class with java. This command tells the Simple CLI to load a class and execute it with any given parameters. E.g., the J48 classifier can be invoked on the iris dataset with the following command:

java weka.classifiers.trees.J48 -t c:/temp/iris.arff

This results in the following output:

4.3 Command Redirection

Starting with this version of Weka one can perform a basic redirection:


java weka.classifiers.trees.J48 -t test.arff > j48.txt

Note: the > must be preceded and followed by a space; otherwise it is not recognized as redirection but as part of another parameter.

4.4 Command Completion

Commands starting with java support completion for classnames and filenames via Tab (Alt+Backspace deletes parts of the command again). If there are several matches, Weka lists all possible matches.

Package name completion:
java weka.cl<Tab>
results in the following output of possible matches of package names:
Possible matches:
weka.classifiers
weka.clusterers

Classname completion:
java weka.classifiers.meta.A<Tab>
lists the following classes:
Possible matches:
weka.classifiers.meta.AdaBoostM1
weka.classifiers.meta.AdditiveRegression
weka.classifiers.meta.AttributeSelectedClassifier

Filename completion: In order for Weka to determine whether the string under the cursor is a classname or a filename, filenames need to be absolute (Unix/Linux: /some/path/file; Windows: C:\Some\Path\file) or relative and starting with a dot (Unix/Linux: ./some/other/path/file; Windows: .\Some\Other\Path\file).


2. ARFF File Format


An ARFF (Attribute-Relation File Format) file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files are not the only format one can load; all files that can be converted with Weka's core converters can be used. The following formats are currently supported:
- ARFF (+ compressed)
- C4.5
- CSV
- libsvm
- binary serialized instances
- XRFF (+ compressed)

Overview

ARFF files have two distinct sections. The first section is the Header information, which is followed by the Data information. The Header of the ARFF file contains the name of the relation, a list of the attributes (the columns in the data), and their types. An example header on the standard IRIS dataset looks like this:

% 1. Title: Iris Plants Database
%
% 2. Sources:
%    (a) Creator: R.A. Fisher
%    (b) Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
%    (c) Date: July, 1988
%
@RELATION iris
@ATTRIBUTE sepallength NUMERIC
@ATTRIBUTE sepalwidth NUMERIC
@ATTRIBUTE petallength NUMERIC
@ATTRIBUTE petalwidth NUMERIC
@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

The Data of the ARFF file looks like the following:

@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa

Lines that begin with a % are comments.


The @RELATION, @ATTRIBUTE and @DATA declarations are case insensitive.

The ARFF Header Section

The ARFF Header section of the file contains the relation declaration and attribute declarations.

The @relation Declaration: The relation name is defined as the first line in the ARFF file. The format is:

@relation <relation-name>

where <relation-name> is a string. The string must be quoted if the name includes spaces.

The @attribute Declarations: Attribute declarations take the form of an ordered sequence of @attribute statements. Each attribute in the data set has its own @attribute statement which uniquely defines the name of that attribute and its data type. The order in which the attributes are declared indicates the column position in the data section of the file. For example, if an attribute is the third one declared, then Weka expects that all of that attribute's values will be found in the third comma-delimited column. The format for the @attribute statement is:

@attribute <attribute-name> <datatype>

where the <attribute-name> must start with an alphabetic character. If spaces are to be included in the name, then the entire name must be quoted. The <datatype> can be any of the four types supported by Weka:
- numeric (integer and real are treated as numeric)
- <nominal-specification>
- string
- date [<date-format>]
- relational, for multi-instance data (for future use)
where <nominal-specification> and <date-format> are defined below. The keywords numeric, real, integer, string and date are case insensitive.

Numeric attributes: Numeric attributes can be real or integer numbers.

Nominal attributes: Nominal values are defined by providing a <nominal-specification> listing the possible values:

{<nominal-name1>, <nominal-name2>, <nominal-name3>, ...}

For example, the class value of the Iris dataset can be defined as follows:

@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

Values that contain spaces must be quoted.


String attributes: String attributes allow us to create attributes containing arbitrary textual values. This is very useful in text-mining applications, as we can create datasets with string attributes, then write Weka filters to manipulate strings (like the StringToWordVector filter). String attributes are declared as follows:

@ATTRIBUTE LCC string

Date attributes: Date attribute declarations take the form:

@attribute <name> date [<date-format>]

where <name> is the name for the attribute and <date-format> is an optional string specifying how date values should be parsed and printed (this is the same format used by SimpleDateFormat). The default format string accepts the ISO-8601 combined date and time format: yyyy-MM-dd'T'HH:mm:ss. Dates must be specified in the data section as the corresponding string representations of the date/time (see example below).

Relational attributes: Relational attribute declarations take the form:

@attribute <name> relational
  <further attribute definitions>
@end <name>

For the multi-instance dataset MUSK1 the definition would look like this (... denotes an omission):

@attribute molecule_name {MUSK-jf78,...,NON-MUSK-199}
@attribute bag relational
  @attribute f1 numeric
  ...
  @attribute f166 numeric
@end bag
@attribute class {0,1}
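As an aside, the default ISO-8601 combined date format above can be checked with a short sketch. Weka itself uses Java's SimpleDateFormat; the Python strptime pattern below is the equivalent, shown only for illustration.

```python
from datetime import datetime

# Parse an ARFF date value written in the default ISO-8601 combined
# format (yyyy-MM-dd'T'HH:mm:ss in SimpleDateFormat terms).
ts = datetime.strptime("2001-04-03T12:12:12", "%Y-%m-%dT%H:%M:%S")
```

Any other <date-format> in the declaration simply means the data-section strings must match that pattern instead.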

The ARFF Data Section


The ARFF Data section of the file contains the data declaration line and the actual instance lines.

The @data Declaration: The @data declaration is a single line denoting the start of the data segment in the file. The format is:

@data

The instance data: Each instance is represented on a single line, with carriage returns denoting the end of the instance. A percent sign (%) introduces a comment, which continues to the end of the line. Attribute values for each instance are delimited by commas. They must appear in the order that they were declared in the header section (i.e. the data corresponding to the nth @attribute declaration is always the nth field of the instance). Missing values are represented by a single question mark, as in:

@data
4.4,?,1.5,?,Iris-setosa


Values of string and nominal attributes are case sensitive, and any that contain a space or the comment-delimiter character % must be quoted. (The code suggests that double quotes are acceptable and that a backslash will escape individual characters.) An example follows:

@relation LCCvsLCSH
@attribute LCC string
@attribute LCSH string
@data
AG5, 'Encyclopedias and dictionaries.;Twentieth century.'
AS262, 'Science -- Soviet Union -- History.'
AE5, 'Encyclopedias and dictionaries.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Phases.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Tables.'

Dates must be specified in the data section using the string representation specified in the attribute declaration. For example:

@RELATION Timestamps
@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss"
@DATA
"2001-04-03 12:12:12"
"2001-05-03 12:59:55"

Relational data must be enclosed within double quotes. For example, an instance of the MUSK1 dataset (... denotes an omission):

MUSK-188,"42,...,30",1
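Because the two ARFF sections are so regular, a stripped-down reader is easy to sketch. The function below is a toy illustration only: it handles unquoted numeric and nominal values plus the ? missing-value marker, and ignores quoting, strings and dates, which a real reader (such as Weka's own converters) must handle.

```python
# Minimal sketch of reading the two ARFF sections described above.
# Handles plain numeric/nominal values and '?' only; no quoting support.
def parse_arff(text):
    attributes, data, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):      # skip blanks and comments
            continue
        lower = line.lower()
        if lower.startswith("@attribute"):
            _, name, dtype = line.split(None, 2)  # keyword, name, datatype
            attributes.append((name, dtype))
        elif lower.startswith("@data"):
            in_data = True                        # everything after is data
        elif in_data:
            data.append([None if v == "?" else v for v in line.split(",")])
    return attributes, data
```

Note how the attribute order in the header fixes the column order in every data row, exactly as the specification above requires.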


3. Preprocess Tab
1. Loading Data

The first four buttons at the top of the Preprocess section enable you to load data into WEKA:
1. Open file.... Brings up a dialog box allowing you to browse for the data file on the local file system.
2. Open URL.... Asks for a Uniform Resource Locator address for where the data is stored.
3. Open DB.... Reads data from a database. (Note that to make this work you might have to edit the file in weka/experiment/DatabaseUtils.props.)
4. Generate.... Enables you to generate artificial data from a variety of Data Generators.
Using the Open file... button you can read files in a variety of formats: WEKA's ARFF format, CSV format, C4.5 format, or serialized Instances format. ARFF files typically have a .arff extension, CSV files a .csv extension, C4.5 files a .data and .names extension, and serialized Instances objects a .bsi extension.

The Current Relation: Once some data has been loaded, the Preprocess panel shows a variety of information. The Current relation box (the "current relation" is the currently loaded data, which can be interpreted as a single relational table in database terminology) has three entries:
1. Relation. The name of the relation, as given in the file it was loaded from. Filters (described below) modify the name of a relation.


2. Instances. The number of instances (data points/records) in the data.
3. Attributes. The number of attributes (features) in the data.

2.3 Working with Attributes

Below the Current relation box is a box titled Attributes. There are four buttons, and beneath them is a list of the attributes in the current relation. The list has three columns:
1. No. A number that identifies the attribute in the order in which they are specified in the data file.
2. Selection tick boxes. These allow you to select which attributes are present in the relation.
3. Name. The name of the attribute, as it was declared in the data file.
When you click on different rows in the list of attributes, the fields change in the box to the right titled Selected attribute. This box displays the characteristics of the currently highlighted attribute in the list:
1. Name. The name of the attribute, the same as that given in the attribute list.
2. Type. The type of attribute, most commonly Nominal or Numeric.
3. Missing. The number (and percentage) of instances in the data for which this attribute is missing (unspecified).
4. Distinct. The number of different values that the data contains for this attribute.
5. Unique. The number (and percentage) of instances in the data having a value for this attribute that no other instances have.
Below these statistics is a list showing more information about the values stored in this attribute, which differs depending on its type. If the attribute is nominal, the list consists of each possible value for the attribute along with the number of instances that have that value. If the attribute is numeric, the list gives four statistics describing the distribution of values in the data: the minimum, maximum, mean and standard deviation. Below these statistics there is a coloured histogram, colour-coded according to the attribute chosen as the Class using the box above the histogram. (This box will bring up a drop-down list of available selections when clicked.) Note that only nominal Class attributes will result in a colour-coding. Finally, after pressing the Visualize All button, histograms for all the attributes in the data are shown in a separate window.

Returning to the attribute list, to begin with all the tick boxes are unticked. They can be toggled on/off by clicking on them individually. The four buttons above can also be used to change the selection:
1. All. All boxes are ticked.
2. None. All boxes are cleared (unticked).
3. Invert. Boxes that are ticked become unticked and vice versa.
4. Pattern. Enables the user to select attributes based on a Perl 5 regular expression. E.g., .*_id selects all attributes whose names end with _id.
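The Missing, Distinct and Unique statistics in the Selected attribute box are straightforward counts over one column of values. A sketch of how they could be computed (an illustration of the definitions above, not Weka's code):

```python
from collections import Counter

def attribute_stats(values):
    """Per-attribute statistics as shown in the Selected attribute box:
    missing  - values that are unspecified (None here, '?' in ARFF),
    distinct - number of different values present,
    unique   - values that occur in exactly one instance."""
    counts = Counter(v for v in values if v is not None)
    return {
        "missing": sum(1 for v in values if v is None),
        "distinct": len(counts),
        "unique": sum(1 for c in counts.values() if c == 1),
    }
```

For example, the column a, b, a, ?, c has one missing value, three distinct values, and two unique ones (b and c).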


Once the desired attributes have been selected, they can be removed by clicking the Remove button below the list of attributes. Note that this can be undone by clicking the Undo button, which is located next to the Edit button in the top-right corner of the Preprocess panel.

Working With Filters

The Preprocess section allows filters to be defined that transform the data in various ways. The Filter box is used to set up the filters that are required. At the left of the Filter box is a Choose button. By clicking this button it is possible to select one of the filters in WEKA. Once a filter has been selected, its name and options are shown in the field next to the Choose button. Clicking on this box with the left mouse button brings up a GenericObjectEditor dialog box. A click with the right mouse button (or Alt+Shift+left click) brings up a menu where you can choose either to display the properties in a GenericObjectEditor dialog box, or to copy the current setup string to the clipboard.

The GenericObjectEditor Dialog Box The GenericObjectEditor dialog box lets you configure a filter. The same kind of dialog box is used to configure other objects, such as classifiers and clusterers (see below). The fields in the window reflect the available options. Right-clicking (or Alt+Shift+LeftClick) on such a field will bring up a popup menu, listing the following options:


1. Show properties... has the same effect as left-clicking on the field, i.e., a dialog appears allowing you to alter the settings.
2. Copy configuration to clipboard copies the currently displayed configuration string to the system's clipboard, so that it can be used anywhere else in WEKA or in the console. This is rather handy if you have to set up complicated, nested schemes.
3. Enter configuration... is the receiving end for configurations that got copied to the clipboard earlier on. In this dialog you can enter a class name followed by options (if the class supports these). This also allows you to transfer a filter setting from the Preprocess panel to a FilteredClassifier used in the Classify panel.
Left-clicking on any of these fields gives an opportunity to alter the filter's settings. For example, the setting may take a text string, in which case you type the string into the text field provided. Or it may give a drop-down box listing several states to choose from. Or it may do something else, depending on the information required. Information on the options is provided in a tool tip if you let the mouse pointer hover over the corresponding field. More information on the filter and its options can be obtained by clicking on the More button in the About panel at the top of the GenericObjectEditor window.

Some objects display a brief description of what they do in an About box, along with a More button. Clicking on the More button brings up a window describing what the different options do. Others have an additional button, Capabilities, which lists the types of attributes and classes the object can handle. At the bottom of the GenericObjectEditor dialog are four buttons. The first two, Open... and Save..., allow object configurations to be stored for future use. The Cancel button backs out without remembering any changes that have been made. Once you are happy with the object and settings you have chosen, click OK to return to the main Explorer window.

Applying Filters

Once you have selected and configured a filter, you can apply it to the data by pressing the Apply button at the right end of the Filter panel in the Preprocess panel. The Preprocess panel will then show the transformed data. The change can be undone by pressing the Undo button. You can also use the Edit... button to modify your data manually in a dataset editor. Finally, the Save... button at the top right of the Preprocess panel saves the current version of the relation in file formats that can represent the relation, allowing it to be kept for future use.

Note: Some of the filters behave differently depending on whether a class attribute has been set or not (using the box above the histogram, which will bring up a drop-down list of possible selections when clicked). In particular, the supervised filters require a class attribute to be set, and some of the unsupervised attribute filters will skip the class attribute if one is set. Note that it is also possible to set Class to None, in which case no class is set.


4. Classification Tab
5.3.1 Selecting a Classifier

At the top of the Classify section is the Classifier box. This box has a text field that gives the name of the currently selected classifier and its options. Clicking on the text box with the left mouse button brings up a GenericObjectEditor dialog box, just the same as for filters, that you can use to configure the options of the current classifier. With a right click (or Alt+Shift+left click) you can once again copy the setup string to the clipboard or display the properties in a GenericObjectEditor dialog box. The Choose button allows you to choose one of the classifiers that are available in WEKA.

5.3.2 Test Options

The result of applying the chosen classifier will be tested according to the options that are set by clicking in the Test options box. There are four test modes:
1. Use training set. The classifier is evaluated on how well it predicts the class of the instances it was trained on.
2. Supplied test set. The classifier is evaluated on how well it predicts the class of a set of instances loaded from a file. Clicking the Set... button brings up a dialog allowing you to choose the file to test on.
3. Cross-validation. The classifier is evaluated by cross-validation, using the number of folds that are entered in the Folds text field.
4. Percentage split. The classifier is evaluated on how well it predicts a certain percentage of the data which is held out for testing. The amount of data held out depends on the value entered in the % field.
Note: No matter which evaluation method is used, the model that is output is always the one built from all the training data. Further testing options can be set by clicking on the More options... button:


1. Output model. The classification model on the full training set is output so that it can be viewed, visualized, etc. This option is selected by default. 2. Output per-class stats. The precision/recall and true/false positive statistics for each class are output. This option is also selected by default. 3. Output entropy evaluation measures. Entropy evaluation measures are included in the output. This option is not selected by default. 4. Output confusion matrix. The confusion matrix of the classifier's predictions is included in the output. This option is selected by default. 5. Store predictions for visualization. The classifier's predictions are remembered so that they can be visualized. This option is selected by default. 6. Output predictions. The predictions on the evaluation data are output. Note that in the case of a cross-validation the instance numbers do not correspond to the location in the data!


7. Output additional attributes. If additional attributes need to be output alongside the predictions, e.g., an ID attribute for tracking misclassifications, then the index of this attribute can be specified here. The usual Weka ranges are supported; first and last are therefore valid indices as well (example: first-3,6,8,12-last). 8. Cost-sensitive evaluation. The errors are evaluated with respect to a cost matrix. The Set... button allows you to specify the cost matrix used. 9. Random seed for xval / % Split. This specifies the random seed used when randomizing the data before it is divided up for evaluation purposes. 10. Preserve order for % Split. This suppresses the randomization of the data before splitting into train and test set. 11. Output source code. If the classifier can output the built model as Java source code, you can specify the class name here. The code will be printed in the Classifier output area.
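The Percentage split test mode and its Random seed option (item 9 above) can be illustrated with a small sketch in Python. This mirrors the idea, not Weka's actual implementation; the instance list is a toy stand-in:

```python
import random

def percentage_split(instances, percent_train, seed=1):
    """Shuffle the data with a fixed seed, then hold out the tail for testing.

    Conceptually like Weka's 'Percentage split' test mode: the model is
    trained on the first percent_train% of the shuffled data and
    evaluated on the remainder. The seed makes the split reproducible.
    """
    data = list(instances)
    random.Random(seed).shuffle(data)            # reproducible via the seed
    cut = round(len(data) * percent_train / 100)
    return data[:cut], data[cut:]                # (train, test)

# Toy usage: 10 instances, 66% split -> 7 train / 3 test.
train, test = percentage_split(range(10), 66)
```

With "Preserve order for % Split" enabled (item 10), the shuffle step above would simply be skipped.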

5.3.3 The Class Attribute


The classifiers in WEKA are designed to be trained to predict a single class attribute, which is the target for prediction. Some classifiers can only learn nominal classes; others can only learn numeric classes (regression problems); still others can learn both. By default, the class is taken to be the last attribute in the data. If you want to train a classifier to predict a different attribute, click on the box below the Test options box to bring up a drop-down list of attributes to choose from. 5.3.4 Training a Classifier Once the classifier, test options and class have all been set, the learning process is started by clicking on the Start button. While the classifier is busy being trained, the little bird moves around. You can stop the training process at any time by clicking on the Stop button. When training is complete, several things happen. The Classifier output area to the right of the display is filled with text describing the results of training and testing. A new entry appears in the Result list box. We look at the result list below; but first we investigate the text that has been output. 5.3.5 The Classifier Output Text The text in the Classifier output area has scroll bars allowing you to browse the results. Clicking with the left mouse button into the text area, while holding Alt and Shift, brings up a dialog that enables you to save the displayed output in a variety of formats (currently, BMP, EPS, JPEG and PNG). Of course, you can also resize the Explorer window to get a larger display area. The output is split into several sections: 1. Run information. A list of information giving the learning scheme options, relation name, instances, attributes and test mode that were involved in the process.


2. Classifier model (full training set). A textual representation of the classification model that was produced on the full training data. 3. The results of the chosen test mode are broken down as follows. 4. Summary. A list of statistics summarizing how accurately the classifier was able to predict the true class of the instances under the chosen test mode. 5. Detailed Accuracy By Class. A more detailed per-class breakdown of the classifier's prediction accuracy. 6. Confusion Matrix. Shows how many instances have been assigned to each class. Elements show the number of test examples whose actual class is the row and whose predicted class is the column. 7. Source code (optional). This section lists the Java source code if one chose Output source code in the More options dialog.


5. Clustering Tab
5.4.1 Selecting a Clusterer By now you will be familiar with the process of selecting and configuring objects. Clicking on the clustering scheme listed in the Clusterer box at the top of the window brings up a GenericObjectEditor dialog with which to choose a new clustering scheme.

Cluster Modes The Cluster mode box is used to choose what to cluster and how to evaluate the results. The first three options are the same as for classification: Use training set, Supplied test set and Percentage split (Section 5.3.1), except that now the data is assigned to clusters instead of trying to predict a specific class. The fourth mode, Classes to clusters evaluation, compares how well the chosen clusters match up with a pre-assigned class in the data. The drop-down box below this option selects the class, just as in the Classify panel. An additional option in the Cluster mode box, the Store clusters for visualization tick box, determines whether or not it will be possible to visualize the clusters once training is complete. When dealing with datasets that are so large that memory becomes a problem, it may be helpful to disable this option.
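The Classes to clusters evaluation mode can be sketched as follows: each cluster is mapped to the class that occurs most often among its members, and every instance whose class differs from its cluster's majority class counts as an error. This is a conceptual sketch in Python, not Weka's code:

```python
from collections import Counter

def classes_to_clusters_error(cluster_ids, class_labels):
    """Map each cluster to its majority class, then count mismatches."""
    majority = {}
    for cid in set(cluster_ids):
        members = [c for k, c in zip(cluster_ids, class_labels) if k == cid]
        majority[cid] = Counter(members).most_common(1)[0][0]
    errors = sum(1 for k, c in zip(cluster_ids, class_labels)
                 if majority[k] != c)
    return errors / len(class_labels)

# Toy data: cluster 0 is mostly 'good', cluster 1 is mostly 'bad',
# so the two minority members count as errors (2 of 6).
clusters = [0, 0, 0, 1, 1, 1]
labels   = ['good', 'good', 'bad', 'bad', 'bad', 'good']
rate = classes_to_clusters_error(clusters, labels)
```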

5.4.3 Ignoring Attributes

Often, some attributes in the data should be ignored when clustering. The Ignore attributes button brings up a small window that allows you to select which attributes are ignored. Clicking on an attribute in the window highlights it, holding down the SHIFT key selects a range of consecutive attributes, and holding down CTRL toggles individual attributes on and off. To cancel the selection, back out with the Cancel button. To activate it, click the Select button. The next time clustering is invoked, the selected attributes are ignored. 5.4.4 Working with Filters The FilteredClusterer meta-clusterer offers the user the possibility to apply filters directly before the clusterer is learned. This approach eliminates the manual application of a filter in the Preprocess panel, since the data gets processed on the fly. This is useful if one needs to try out different filter setups. 5.4.5 Learning Clusters The Cluster section, like the Classify section, has Start/Stop buttons, a result text area and a result list. These all behave just like their classification counterparts. Right-clicking an entry in the result list brings up a similar menu, except that it shows only two visualization options: Visualize cluster assignments and Visualize tree. The latter is grayed out when it is not applicable.


6. Associate Tab
5.5.1 Setting Up This panel contains schemes for learning association rules, and the learners are chosen and configured in the same way as the clusterers, filters, and classifiers in the other panels.

5.5.2 Learning Associations Once appropriate parameters for the association rule learner have been set, click the Start button. When complete, right-clicking on an entry in the result list allows the results to be viewed or saved.


7. Selecting Attributes Tab


5.6.1 Searching and Evaluating Attribute selection involves searching through all possible combinations of attributes in the data to find which subset of attributes works best for prediction. To do this, two objects must be set up: an attribute evaluator and a search method. The evaluator determines what method is used to assign a worth to each subset of attributes. The search method determines what style of search is performed.
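The evaluator/search split described above can be illustrated with a greedy forward search: the search method proposes candidate attribute subsets, while a pluggable evaluator scores each one. The merit function below is a made-up stand-in for a real evaluator such as CfsSubsetEval:

```python
def forward_search(attributes, evaluate):
    """Greedy forward selection: grow the subset one attribute at a
    time, keeping any addition that improves the evaluator's merit."""
    selected, best = [], evaluate([])
    improved = True
    while improved:
        improved = False
        for a in attributes:
            if a in selected:
                continue
            merit = evaluate(selected + [a])
            if merit > best:
                best, selected = merit, selected + [a]
                improved = True
    return selected, best

# Toy evaluator (hypothetical): pretend 'credit_amount' and 'age'
# together are the best subset, and penalize irrelevant attributes.
def toy_merit(subset):
    target = {'credit_amount', 'age'}
    return len(target & set(subset)) - 0.1 * len(set(subset) - target)

subset, merit = forward_search(['duration', 'credit_amount', 'age', 'job'],
                               toy_merit)
```

In Weka, swapping the search method (e.g., BestFirst vs. Ranker) changes only how candidate subsets are generated; the evaluator scoring them stays independent.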

5.6.2 Options The Attribute Selection Mode box has two options:

1. Use full training set. The worth of the attribute subset is determined using the full set of training data. 2. Cross-validation. The worth of the attribute subset is determined by a process of cross-validation. The Fold and Seed fields set the number of folds to use and the random seed used when shuffling the data. As with Classify (Section 5.3.1), there is a drop-down box that can be used to specify which attribute to treat as the class. 5.6.3 Performing Selection Clicking Start starts running the attribute selection process. When it is finished, the results are output into the result area, and an entry is added to the result list. Right-clicking on the result list gives several options. The first three (View in main window, View in separate window and Save result buffer) are the same as for the classify panel. It is also possible to Visualize reduced data, or if you have used an attribute transformer such as Principal Components, Visualize transformed data. The reduced/transformed data can be saved to a file with the Save reduced data... or Save transformed data... option. In case one wants to reduce/transform a training and a test set at the same time and not use the Attribute Selected Classifier from the classifier panel, it is best to use the Attribute Selection filter (a supervised attribute filter) in batch mode (-b) from the command line or in the Simple CLI. The batch mode allows one to specify an additional input and output file pair (options -r and -s) that is processed with the filter setup that was determined based on the training data.


8. Visualizing Tab
WEKA's visualization section allows you to visualize 2D plots of the current relation.

5.7.1 The scatter plot matrix When you select the Visualize panel, it shows a scatter plot matrix for all the attributes, colour coded according to the currently selected class. It is possible to change the size of each individual 2D plot and the point size, and to randomly jitter the data (to uncover obscured points). It is also possible to change the attribute used to colour the plots, to select only a subset of attributes for inclusion in the scatter plot matrix, and to subsample the data. Note that changes will only come into effect once the Update button has been pressed. 5.7.2 Selecting an individual 2D scatter plot When you click on a cell in the scatter plot matrix, this will bring up a separate window with a visualization of the scatter plot you selected. (We described above how to visualize particular results in a separate window, for example classifier errors; the same visualization controls are used here.) Data points are plotted in the main area of the window. At the top are two drop-down list buttons for selecting the axes to plot. The one on the left shows which attribute is used for the x-axis; the one on the right shows which is used for the y-axis. Beneath the x-axis selector is a drop-down list for choosing the colour scheme. This allows you to colour the points based on the attribute selected. Below the plot area, a legend describes what


values the colours correspond to. If the values are discrete, you can modify the colour used for each one by clicking on them and making an appropriate selection in the window that pops up. To the right of the plot area is a series of horizontal strips. Each strip represents an attribute, and the dots within it show the distribution of values of the attribute. These values are randomly scattered vertically to help you see concentrations of points. You can choose what axes are used in the main graph by clicking on these strips. Left-clicking an attribute strip changes the x-axis to that attribute, whereas right-clicking changes the y-axis. The X and Y written beside the strips shows what the current axes are (B is used for both X and Y). Above the attribute strips is a slider labelled Jitter, which is a random displacement given to all points in the plot. Dragging it to the right increases the amount of jitter, which is useful for spotting concentrations of points. Without jitter, a million instances at the same point would look no different to just a single lonely instance. 5.7.3 Selecting Instances There may be situations where it is helpful to select a subset of the data using the visualization tool. (A special case of this is the User Classifier in the Classify panel, which lets you build your own classifier by interactively selecting instances.) Below the y-axis selector button is a drop-down list button for choosing a selection method. A group of data points can be selected in four ways: 1. Select Instance. Clicking on an individual data point brings up a window listing its attributes. If more than one point appears at the same location, more than one set of attributes is shown. 2. Rectangle. You can create a rectangle, by dragging, that selects the points inside it. 3. Polygon. You can build a free-form polygon that selects the points inside it. Left-click to add vertices to the polygon, right-click to complete it. 
The polygon will always be closed off by connecting the first point to the last. 4. Polyline. You can build a polyline that distinguishes the points on one side from those on the other. Left-click to add vertices to the polyline, right-click to finish. The resulting shape is open (as opposed to a polygon, which is always closed). Once an area of the plot has been selected using Rectangle, Polygon or Polyline, it turns grey. At this point, clicking the Submit button removes all instances from the plot except those within the grey selection area. Clicking on the Clear button erases the selected area without affecting the graph. Once any points have been removed from the graph, the Submit button changes to a Reset button. This button undoes all previous removals and returns you to the original graph with all points included. Finally, clicking the Save button allows you to save the currently visible instances to a new ARFF file.


9. Description of German Credit Data.


Credit Risk Assessment Description: The business of banks is making loans. Assessing the credit worthiness of an applicant is of crucial importance. You have to develop a system to help a loan officer decide whether the credit of a customer is good or bad. A bank's business rules regarding loans must consider two opposing factors. On the one hand, a bank wants to make as many loans as possible. Interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make too many bad loans. Too many bad loans could lead to the collapse of the bank. The bank's loan policy must involve a compromise: not too strict and not too lenient. To do the assignment, you first and foremost need some knowledge about the world of credit. You can acquire such knowledge in a number of ways. 1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try to represent her knowledge in a number of ways. 2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on finance. Translate this knowledge from text form to production rule form. 3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which can be used to judge the credit worthiness of a loan applicant. 4. Case histories: Find records of actual cases where competent loan officers correctly judged when, and when not to, approve a loan application.

The German Credit Data


Actual historical credit data is not always easy to come by because of confidentiality rules. Here is one such data set, consisting of 1000 actual cases collected in Germany. In spite of the fact that the data is German, you should probably make use of it for this assignment (unless you really can consult a real loan officer!). There are 20 attributes used in judging a loan applicant (i.e., 7 numerical attributes and 13 categorical or nominal attributes). The goal is to classify the applicant into one of two categories: good or bad. The attributes present in the German credit data are: 1. Checking_Status 2. Duration 3. Credit_history 4. Purpose 5. Credit_amount 6. Savings_status 7. Employment 8. Installment_Commitment 9. Personal_status 10. Other_parties 11. Residence_since 12. Property_Magnitude 13. Age

14. Other_payment_plans 15. Housing 16. Existing_credits 17. Job 18. Num_dependents 19. Own_telephone 20. Foreign_worker 21. Class

Tasks (Turn in your answers to the following tasks)

1. List all the categorical (or nominal) attributes and the real valued attributes separately.

Ans) The following are the Categorical (or Nominal) attributes: 1. Checking_Status 2. Credit_history 3. Purpose 4. Savings_status 5. Employment 6. Personal_status 7. Other_parties 8. Property_Magnitude 9. Other_payment_plans

10. Housing 11. Job 12. Own_telephone 13. Foreign_worker

The following are the Numerical attributes:

1. Duration 2. Credit_amount 3. Installment_Commitment 4. Residence_since 5. Age 6. Existing_credits 7. Num_dependents

2. What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes.

Ans) The following attributes may be crucial in making the credit assessment: 1. Credit_amount 2. Age 3. Job 4. Savings_status 5. Existing_credits 6. Installment_commitment 7. Property_magnitude. For example (common-sense rules, not derived from the data): if Savings_status is high and Existing_credits is low, credit is good; if Credit_amount is very high and the applicant is young with an unskilled Job, credit is bad.

3. One type of model that you can create is a decision tree. Train a decision tree using the complete data set as the training data. Report the model obtained after training.

Ans) We created a decision tree by using the J48 technique on the complete dataset as the training data. The following model was obtained after training:

=== Run information ===
Scheme: weka.classifiers.trees.J48 -C 0.25 -M 2
Relation: german_credit
Instances: 1000
Attributes: 21
checking_status duration credit_history purpose credit_amount savings_status employment installment_commitment personal_status other_parties residence_since

property_magnitude age other_payment_plans housing existing_credits job num_dependents own_telephone foreign_worker class
Test mode: evaluate on training data

=== Classifier model (full training set) ===
J48 pruned tree
------------------
Number of Leaves :  103
Size of the tree :  140

Time taken to build model: 0.08 seconds

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         855               85.5    %
Incorrectly Classified Instances       145               14.5    %
Kappa statistic                          0.6251
Mean absolute error                      0.2312
Root mean squared error                  0.34
Relative absolute error                 55.0377 %
Root relative squared error             74.2015 %
Coverage of cases (0.95 level)         100      %
Mean rel. region size (0.95 level)      93.3    %
Total Number of Instances             1000

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.956    0.38     0.854      0.956   0.902      0.857     good
               0.62     0.044    0.857      0.62    0.72       0.857     bad

Weighted Avg.  0.855    0.279    0.855      0.855   0.847      0.857

=== Confusion Matrix ===
   a   b   <-- classified as
 669  31 |   a = good
 114 186 |   b = bad

4. Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each of the examples in the dataset. What % of examples can you classify correctly? (This is also called testing on the training set.) Why do you think you cannot get 100% training accuracy?

Ans) Using the model trained on the complete dataset to classify the same examples, 85.5% of the examples are classified correctly. We cannot get 100% training accuracy because the J48 tree is pruned, and because some instances with identical or very similar attribute values belong to different classes, so no tree can separate them perfectly.

5. Is testing on the training set as you did above a good idea? Why or why not?

Ans) No, it is not a good idea. Testing on the training set gives an optimistic estimate of performance: the model has already seen these examples, so the measured accuracy says little about how well it will classify new, unseen applicants.

6. One approach for solving the problem encountered in the previous question is using cross-validation. Describe briefly what cross-validation is. Train a decision tree again using cross-validation and report your results. Does accuracy increase/decrease? Why?

Ans) Cross-validation: the data is divided into a number of folds (entered in the Folds text field); the classifier is trained on all folds but one and evaluated on the remaining fold, and this is repeated so that each fold is used once for testing. In the Classify tab, select the Cross-validation option with fold size 10, then 5, then 2, pressing Start each time.

i) Fold Size-10

=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances         705               70.5    %
Incorrectly Classified Instances       295               29.5    %
Kappa statistic                          0.2467
Mean absolute error                      0.3467
Root mean squared error                  0.4796
Relative absolute error                 82.5233 %
Root relative squared error            104.6565 %
Coverage of cases (0.95 level)          92.8    %
Mean rel. region size (0.95 level)      91.7    %
Total Number of Instances             1000

=== Detailed Accuracy By Class === TP Rate FP Rate Precision Recall F-Measure ROC Area Class

               0.84     0.61     0.763      0.84    0.799      0.639     good
               0.39     0.16     0.511      0.39    0.442      0.639     bad
Weighted Avg.  0.705    0.475    0.687      0.705   0.692      0.639

=== Confusion Matrix ===
   a   b   <-- classified as
 588 112 |   a = good
 183 117 |   b = bad

ii) Fold Size-5

=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances         733               73.3    %
Incorrectly Classified Instances       267               26.7    %
Kappa statistic                          0.3264
Mean absolute error                      0.3293
Root mean squared error                  0.4579
Relative absolute error                 78.3705 %
Root relative squared error             99.914  %
Coverage of cases (0.95 level)          94.7    %
Mean rel. region size (0.95 level)      93      %
Total Number of Instances             1000

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.851    0.543    0.785      0.851   0.817      0.685     good
               0.457    0.149    0.568      0.457   0.506      0.685     bad
Weighted Avg.  0.733    0.425    0.72       0.733   0.724      0.685

=== Confusion Matrix ===
   a   b   <-- classified as
 596 104 |   a = good
 163 137 |   b = bad

iii) Fold Size-2

=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances         721               72.1    %
Incorrectly Classified Instances       279               27.9    %
Kappa statistic                          0.2443
Mean absolute error                      0.3407
Root mean squared error                  0.4669
Relative absolute error                 81.0491 %
Root relative squared error            101.8806 %
Coverage of cases (0.95 level)          92.8    %
Mean rel. region size (0.95 level)      91.3    %
Total Number of Instances             1000

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.891    0.677    0.755      0.891   0.817      0.662     good
               0.323    0.109    0.561      0.323   0.41       0.662     bad
Weighted Avg.  0.721    0.506    0.696      0.721   0.695      0.662

=== Confusion Matrix ===
   a   b   <-- classified as
 624  76 |   a = good
 203  97 |   b = bad

Note: With this observation, we see that cross-validated accuracy is well below the training-set accuracy of 85.5%, because each fold is tested on data the tree has not seen. Among the three runs, accuracy is highest with 5 folds (73.3%) and lower with 2 folds (72.1%) and 10 folds (70.5%).
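The fold bookkeeping behind these runs can be sketched as follows; each instance lands in exactly one test fold. (Weka additionally stratifies the folds by class, which this toy sketch omits.)

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Instances are dealt round-robin into k folds; each fold serves
    once as the test set while the other k-1 folds form the training set.
    """
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 10 instances, 5 folds: every test fold holds exactly 2 instances.
splits = list(kfold_indices(10, 5))
```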

7. Check to see if the data shows a bias against foreign workers or personal-status. One way to do this is to remove these attributes from the data set and see if the decision tree created in those cases is significantly different from the full dataset case which you have already done. Did removing these attributes have any significant effect? Discuss.

Ans) We use the Preprocess tab in the Weka GUI Explorer to remove the attributes Foreign_worker and Personal_status, one at a time. In the Classify tab, select the Use training set option and press Start. With each attribute removed from the dataset, we can compare the accuracy to that of the full data set.

i) If Foreign_worker is removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         859               85.9    %
Incorrectly Classified Instances       141               14.1    %
Kappa statistic                          0.6377
Mean absolute error                      0.2233
Root mean squared error                  0.3341
Relative absolute error                 53.1347 %
Root relative squared error             72.9074 %
Coverage of cases (0.95 level)         100      %
Mean rel. region size (0.95 level)      91.9    %
Total Number of Instances             1000

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.954    0.363    0.86       0.954   0.905      0.867     good
               0.637    0.046    0.857      0.637   0.73       0.867     bad
Weighted Avg.  0.859    0.268    0.859      0.859   0.852      0.867

=== Confusion Matrix ===
   a   b   <-- classified as
 668  32 |   a = good
 109 191 |   b = bad

ii) If Personal_status is removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         866               86.6    %
Incorrectly Classified Instances       134               13.4    %
Kappa statistic                          0.6582
Mean absolute error                      0.2162
Root mean squared error                  0.3288
Relative absolute error                 51.4483 %
Root relative squared error             71.7411 %
Coverage of cases (0.95 level)         100      %
Mean rel. region size (0.95 level)      91.7    %
Total Number of Instances             1000

=== Detailed Accuracy By Class ===
               TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
               0.954    0.34     0.868      0.954   0.909      0.868     good
               0.66     0.046    0.861      0.66    0.747      0.868     bad
Weighted Avg.  0.866    0.252    0.866      0.866   0.86       0.868

=== Confusion Matrix ===
   a   b   <-- classified as
 668  32 |   a = good
 102 198 |   b = bad

Note: With this observation we see that training accuracy changes only slightly when either attribute is removed (85.9% without Foreign_worker and 86.6% without Personal_status, versus 85.5% with all attributes), so removing these attributes does not significantly change the model's performance.


8. Another question might be, do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You had removed two attributes in problem 7. Remember to reload the arff data file to get all the attributes initially before you start selecting the ones you want.)

Ans) We use the Preprocess tab in the Weka GUI Explorer to remove one attribute at a time; in the Classify tab, we select the Use training set option and press Start, comparing the accuracy to that of the full data set each time. Remember to reload the previously removed attribute with the Undo option in the Preprocess tab before each run.

i) 2nd attribute (Duration) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         841               84.1    %
Incorrectly Classified Instances       159               15.9    %
=== Confusion Matrix ===
   a   b   <-- classified as
 647  53 |   a = good
 106 194 |   b = bad

ii) 3rd attribute (Credit_history) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         839               83.9    %
Incorrectly Classified Instances       161               16.1    %
=== Confusion Matrix ===
   a   b   <-- classified as
 645  55 |   a = good
 106 194 |   b = bad

iii) 5th attribute (Credit_amount) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         864               86.4    %
Incorrectly Classified Instances       136               13.6    %
=== Confusion Matrix ===
   a   b   <-- classified as
 675  25 |   a = good
 111 189 |   b = bad

iv) 7th attribute (Employment) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         858               85.8    %
Incorrectly Classified Instances       142               14.2    %
=== Confusion Matrix ===
   a   b   <-- classified as
 670  30 |   a = good
 112 188 |   b = bad

v) 10th attribute (Other_parties) removed:

Time taken to build model: 0.05 seconds
=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         845               84.5    %
Incorrectly Classified Instances       155               15.5    %
=== Confusion Matrix ===
   a   b   <-- classified as
 663  37 |   a = good
 118 182 |   b = bad

vi) 17th attribute (Job) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         859               85.9    %
Incorrectly Classified Instances       141               14.1    %
=== Confusion Matrix ===
   a   b   <-- classified as
 675  25 |   a = good
 116 184 |   b = bad

vii) 21st attribute (Class) removed:

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances         963               96.3    %
Incorrectly Classified Instances        37                3.7    %
=== Confusion Matrix ===
   a   b   <-- classified as
 963   0 |   a = yes
  37   0 |   b = no

Note: With this observation we see that when the 3rd attribute (Credit_history) is removed, accuracy drops to 83.9%, so this attribute is important for classification. When the 2nd (Duration) or 10th (Other_parties) attribute is removed, accuracy stays around 84%, and when the 7th (Employment) or 17th (Job) attribute is removed it stays around 85-86%, so any one of these could be dropped with little loss. Removing the 5th attribute (Credit_amount) slightly increases training accuracy (86.4%), so it may not be needed. Removing the 21st attribute is not meaningful: it is the class itself, so Weka then predicts a different (yes/no) attribute, and the 96.3% figure does not refer to the credit classification at all.
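Each accuracy figure above is just the diagonal of its confusion matrix divided by the total. A quick check in Python for the Duration-removed run:

```python
# Confusion matrix for the run with Duration removed (rows = actual class):
conf = {('good', 'good'): 647, ('good', 'bad'): 53,
        ('bad',  'good'): 106, ('bad',  'bad'): 194}

total    = sum(conf.values())                                      # 1000
accuracy = (conf[('good', 'good')] + conf[('bad', 'bad')]) / total  # 0.841

# Per-class TP rates (recall), as in the Detailed Accuracy output:
recall_good = conf[('good', 'good')] / (647 + 53)    # of 700 actual good
recall_bad  = conf[('bad', 'bad')]  / (106 + 194)    # of 300 actual bad
```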

9. Sometimes, the cost of rejecting an applicant who actually has good credit may be higher than the cost of accepting an applicant who has bad credit. Instead of counting the misclassifications equally in both cases, give a higher cost to the first case (say, cost 5) and a lower cost to the second case, by using a cost matrix in Weka. Train your decision tree and report the decision tree and cross-validation results. Are they significantly different from the results obtained in problem 6?

Ans) In the Weka GUI Explorer, select the Classify tab and the Use training set option. Press the Choose button and select J48 as the decision tree technique. Press the More options button to open the classifier evaluation options window, select Cost sensitive evaluation, and press the Set button to open the Cost Matrix Editor. Set the number of classes to 2 and press the Resize button to get a 2x2 cost matrix. Change the value at location (0,1) to 5; the modified cost matrix is as follows.


0.0 5.0
1.0 0.0

Then close the Cost Matrix Editor, press the OK button, and press the Start button.

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances     855    85.5 %
Incorrectly Classified Instances   145    14.5 %
=== Confusion Matrix ===
a b <-- classified as
669 31 | a = good
114 186 | b = bad

Note: Of the 700 good customers, 669 are classified as good and 31 are misclassified as bad. Of the 300 bad customers, 186 are classified as bad and 114 are misclassified as good.
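The effect of the cost matrix can be checked by hand: the total misclassification cost is the element-wise product of the cost matrix and the confusion matrix, summed. A small sketch (plain Python, not Weka's internal calculation):

```python
# Sketch: total misclassification cost from a confusion matrix and a
# cost matrix. cost[i][j] is the cost of predicting class j when the
# true class is i; entry (0,1) was set to 5 in the Cost Matrix Editor.
def total_cost(confusion, cost):
    return sum(confusion[i][j] * cost[i][j]
               for i in range(len(cost)) for j in range(len(cost)))

confusion = [[669, 31], [114, 186]]   # rows: actual good / actual bad
cost = [[0.0, 5.0], [1.0, 0.0]]       # rejecting a good applicant costs 5
print(total_cost(confusion, cost))    # 31*5 + 114*1 = 269.0
```

Because rejecting good applicants is weighted five times as heavily, the cost-sensitive tree is pushed to make fewer good-to-bad errors even at the price of more bad-to-good errors.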

10. Do you think it is a good idea to prefer simple decision trees instead of long, complex decision trees? How does the complexity of a decision tree relate to the bias of the model?

Ans) It is a good idea to prefer simple decision trees over complex ones. A simple (shallow) tree has higher bias but lower variance, so it is less likely to overfit the training data; a long, complex tree has lower bias but higher variance and tends to memorize noise in the training set.

11. You can make your decision trees simpler by pruning the nodes. One approach is to use reduced error pruning. Explain this idea briefly. Try reduced error pruning for training your decision trees using cross-validation and report the decision trees you obtain. Also report your accuracy using the pruned model. Does your accuracy increase?

Ans) We can make our decision tree simpler by pruning the nodes. Reduced error pruning holds out part of the training data as a validation set and, working from the leaves upward, replaces a subtree with a leaf labeled with its most frequent class whenever doing so does not reduce accuracy on the validation set. In the Weka GUI Explorer, select the Classify tab and the Use training set option. Press the Choose button and select J48 as the decision tree technique. Click on the "J48 -C 0.25 -M 2" text beside the Choose button to open the Generic Object Editor, set the reducedErrorPruning property to True, press OK, and then press the Start button.

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances     786    78.6 %
Incorrectly Classified Instances   214    21.4 %
=== Confusion Matrix ===
a b <-- classified as
662 38 | a = good


176 124 | b = bad


With the pruned model the accuracy decreased; however, pruning the nodes makes the decision tree simpler.

12. How can you convert a decision tree into if-then-else rules? Make up your own small decision tree consisting of 2-3 levels and convert it into a set of rules. There also exist classifiers that output the model in the form of rules; one such classifier in Weka is rules.PART. Train this model and report the set of rules obtained. Sometimes just one attribute can be good enough to make the decision, yes, just one! Can you predict which attribute that might be in this data set? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART, and OneR.

Ans) Sample decision tree of 2-3 levels:
Age?
├── youth        → Student?
│      ├── yes   → yes
│      └── no    → no
├── middle_aged  → yes
└── senior       → Credit_rating?
       ├── excellent → yes
       └── fair      → no

Converting the decision tree into a set of rules:
Rule 1: IF age = youth AND student = yes THEN buys_computer = yes
Rule 2: IF age = youth AND student = no THEN buys_computer = no
Rule 3: IF age = middle_aged THEN buys_computer = yes
Rule 4: IF age = senior AND credit_rating = excellent THEN buys_computer = yes
Rule 5: IF age = senior AND credit_rating = fair THEN buys_computer = no
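The conversion above is mechanical: every root-to-leaf path becomes one rule, with the tests along the path ANDed together. A minimal sketch (the nested-dict tree mirrors the buys_computer example and is hypothetical):

```python
# Sketch: convert a small decision tree, encoded as a nested dict
# {attribute: {value: subtree_or_leaf}}, into if-then rules.
tree = {"age": {
    "youth": {"student": {"yes": "yes", "no": "no"}},
    "middle_aged": "yes",
    "senior": {"credit_rating": {"excellent": "yes", "fair": "no"}},
}}

def tree_to_rules(node, conditions=()):
    if not isinstance(node, dict):           # leaf: emit one rule
        return [(" AND ".join(conditions) or "TRUE", node)]
    (attr, branches), = node.items()         # single test per node
    rules = []
    for value, child in branches.items():    # one rule per branch taken
        rules += tree_to_rules(child, conditions + (f"{attr}={value}",))
    return rules

for cond, label in tree_to_rules(tree):
    print(f"IF {cond} THEN buys_computer={label}")
```

The traversal yields exactly the five rules listed above, one per leaf.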

In the Weka GUI Explorer, select the Classify tab and the Use training set option. There also exist classifiers that output the model in the form of rules; such classifiers in Weka are PART and OneR. Go to Choose, select Rules, then select PART and press the Start button.


=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances     897    89.7 %
Incorrectly Classified Instances   103    10.3 %
=== Confusion Matrix ===
a b <-- classified as
653 47 | a = good
56 244 | b = bad

Then go to Choose, select Rules, then select OneR and press the Start button.

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances     742    74.2 %
Incorrectly Classified Instances   258    25.8 %
=== Confusion Matrix ===
a b <-- classified as
642 58 | a = good
200 100 | b = bad

Then go to Choose, select Trees, then select J48 and press the Start button.

=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances     855    85.5 %
Incorrectly Classified Instances   145    14.5 %
=== Confusion Matrix ===
a b <-- classified as
669 31 | a = good
114 186 | b = bad

Note: From these observations, the classifiers rank as follows: 1. PART, 2. J48, 3. OneR.
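OneR's idea is simple enough to sketch directly: for each attribute, predict the majority class for each of its values, count the resulting training errors, and keep the attribute with the fewest errors. A toy sketch (the weather-style dataset and attribute names are hypothetical, not the German credit data):

```python
# Sketch of the OneR idea: pick the single attribute whose one-level
# rule (value -> majority class) makes the fewest training errors.
from collections import Counter

def one_r(rows, attributes, label):
    best = None
    for attr in attributes:
        by_value = {}                          # class counts per value
        for row in rows:
            by_value.setdefault(row[attr], Counter())[row[label]] += 1
        # errors = instances not in the majority class of their value
        errors = sum(sum(c.values()) - max(c.values())
                     for c in by_value.values())
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        if best is None or errors < best[1]:
            best = (attr, errors, rule)
    return best

rows = [
    {"outlook": "sunny", "windy": "no",  "play": "no"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rainy", "windy": "no",  "play": "yes"},
    {"outlook": "rainy", "windy": "yes", "play": "no"},
]
attr, errors, rule = one_r(rows, ["outlook", "windy"], "play")
print(attr, errors, rule)
```

On the credit data, Weka's OneR picks its attribute the same way, which is why its training accuracy (74.2%) trails the multi-attribute models.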


VIVA VOCE QUESTIONS AND ANSWERS


1. Define data mining.
It refers to extracting or mining knowledge from large amounts of data. Data mining is a process of discovering interesting knowledge from large amounts of data stored in databases, data warehouses, or other information repositories.

2. Give some alternative terms for data mining.
Knowledge mining
Knowledge extraction
Data/pattern analysis
Data archaeology
Data dredging

3. What is KDD?
KDD: Knowledge Discovery in Databases.

4. What are the steps involved in the KDD process?
Data cleaning
Data integration
Data selection
Data transformation
Data mining
Pattern evaluation
Knowledge presentation

5. What is the use of the knowledge base?
The knowledge base is domain knowledge that is used to guide the search or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies used to organize attributes/attribute values into different levels of abstraction.

6. Mention some of the data mining techniques.
Statistics
Machine learning
Decision trees
Hidden Markov models
Artificial intelligence
Genetic algorithms
Meta learning

7. Give a few statistical techniques.
Point estimation
Data summarization
Bayesian techniques
Hypothesis testing
Correlation
Regression


8. What is meta learning?
The concept of combining the predictions made from multiple models of data mining and analyzing those predictions to formulate a new and previously unknown prediction.

9. Define genetic algorithm.
A search algorithm. It enables us to locate an optimal binary string by processing an initial random population of binary strings through operations such as artificial mutation, crossover, and selection.

11. What is the purpose of a data mining technique?
It provides a way to use various data mining tasks.

12. Define predictive model.
It is used to predict the values of data by making use of known results from a different set of sample data.

13. Data mining tasks that belong to the predictive model:
Classification
Regression
Time series analysis

14. Define descriptive model.
It is used to determine the patterns and relationships in a sample of data. Data mining tasks that belong to the descriptive model:
Clustering
Summarization
Association rules
Sequence discovery

15. Define the term summarization.
The summarization of a large chunk of data contained in a web page or a document. Summarization = characterization = generalization.

16. List out the advanced database systems.
Extended-relational databases
Object-oriented databases
Deductive databases
Spatial databases
Temporal databases
Multimedia databases
Active databases
Scientific databases
Knowledge databases

17. Define cluster analysis.
Cluster analysis analyzes data objects without consulting a known class label. The class labels are not present in the training data simply because they are not known to begin with.


18. Classifications of data mining systems.
Based on the kinds of databases mined:
  According to model:
    Relational mining systems
    Transactional mining systems
    Object-oriented mining systems
    Object-relational mining systems
    Data warehouse mining systems
  According to types of data:
    Spatial data mining systems
    Time series data mining systems
    Text data mining systems
    Multimedia data mining systems
Based on the kinds of knowledge mined:
  According to functionalities:
    Characterization
    Discrimination
    Association
    Classification
    Clustering
    Outlier analysis
    Evolution analysis
  According to levels of abstraction of the knowledge mined:
    Generalized knowledge (high level of abstraction)
    Primitive-level knowledge (raw data level)
  According to mining data regularities versus data irregularities
Based on the kinds of techniques utilized:
  According to user interaction:
    Autonomous systems
    Interactive exploratory systems
    Query-driven systems
  According to methods of data analysis:
    Database-oriented
    Data warehouse-oriented
    Machine learning
    Statistics
    Visualization
    Pattern recognition
    Neural networks
Based on the applications adopted:
  Finance
  Telecommunication
  DNA
  Stock markets
  E-mail, and so on


19. Describe challenges to data mining regarding data mining methodology and user interaction issues.
Mining different kinds of knowledge in databases
Interactive mining of knowledge at multiple levels of abstraction
Incorporation of background knowledge
Data mining query languages and ad hoc data mining
Presentation and visualization of data mining results
Handling noisy or incomplete data
Pattern evaluation

20. Describe challenges to data mining regarding performance issues.
Efficiency and scalability of data mining algorithms
Parallel, distributed, and incremental mining algorithms

21. Describe issues relating to the diversity of database types.
Handling of relational and complex types of data
Mining information from heterogeneous databases and global information systems

22. What is meant by pattern?
A pattern represents knowledge if it is easily understood by humans; valid on test data with some degree of certainty; and potentially useful, novel, or validates a hunch about which the user was curious. Measures of pattern interestingness, either objective or subjective, can be used to guide the discovery process.

23. How is a data warehouse different from a database?
A data warehouse is a repository of multiple heterogeneous data sources, organized under a unified schema at a single site in order to facilitate management decision-making. A database consists of a collection of interrelated data.

1. Define association rule mining.
Association rule mining searches for interesting relationships among items in a given data set.

2. When can we say the association rules are interesting?
Association rules are considered interesting if they satisfy both a minimum support threshold and a minimum confidence threshold. Users or domain experts can set such thresholds.

3. Explain association rules in mathematical notation.
Let I = {i1, i2, ..., im} be a set of items. Let D, the task-relevant data, be a set of database transactions, where each transaction T is a set of items. An association rule is an implication of the form A => B, where A ⊂ I, B ⊂ I, and A ∩ B = ∅. The rule A => B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain A ∪ B. The rule A => B has confidence c in the transaction set D if c is the percentage of transactions in D containing A that also contain B.

4. Define support and confidence in association rule mining.
Support s is the percentage of transactions in D that contain A ∪ B.


Confidence c is the percentage of transactions in D containing A that also contain B.
Support(A => B) = P(A ∪ B)
Confidence(A => B) = P(B | A)

5. How are association rules mined from large databases?
Step I: Find all frequent itemsets.
Step II: Generate strong association rules from the frequent itemsets.

6. Describe the different classifications of association rule mining.
Based on the types of values handled in the rule:
  i. Boolean association rules
  ii. Quantitative association rules
Based on the dimensions of data involved:
  i. Single-dimensional association rules
  ii. Multidimensional association rules
Based on the levels of abstraction involved:
  i. Multilevel association rules
  ii. Single-level association rules
Based on various extensions:
  i. Correlation analysis

7. What is the purpose of the Apriori algorithm?
The Apriori algorithm is an influential algorithm for mining frequent itemsets for Boolean association rules. The name of the algorithm is based on the fact that the algorithm uses prior knowledge of frequent itemset properties.

8. Define the anti-monotone property.
If a set cannot pass a test, all of its supersets will fail the same test as well.

9. How do we generate association rules from frequent itemsets?
Association rules can be generated as follows:
For each frequent itemset l, generate all non-empty subsets of l.
For every non-empty subset s of l, output the rule s => (l - s) if
support_count(l) / support_count(s) >= min_conf,
where min_conf is the minimum confidence threshold.

10. Give a few techniques to improve the efficiency of the Apriori algorithm.
Hash-based technique
Transaction reduction
Partitioning
Sampling
Dynamic itemset counting

11. What hurts the performance of the Apriori candidate generation technique?
The need to generate a huge number of candidate sets
The need to repeatedly scan the database and check a large set of


candidates by pattern matching
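The support and confidence definitions from questions 3 and 4 can be illustrated with a small sketch over a hypothetical transaction set (transactions and item names are made up):

```python
# Sketch of support and confidence: support(A=>B) = P(A ∪ B) and
# confidence(A=>B) = P(B | A), computed over a toy transaction set D.
D = [{"bread", "milk"},
     {"bread", "butter"},
     {"bread", "milk", "butter"},
     {"milk"}]

def support(itemset):
    # fraction of transactions containing every item in the itemset
    return sum(1 for t in D if itemset <= t) / len(D)

def confidence(A, B):
    return support(A | B) / support(A)

print(support({"bread", "milk"}))        # 2 of 4 transactions: 0.5
print(confidence({"bread"}, {"milk"}))   # 2 of the 3 bread-buyers: 2/3
```

A rule such as {bread} => {milk} is "strong" only if both values clear the user-chosen minimum support and minimum confidence thresholds.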


12. Describe the method of generating frequent itemsets without candidate generation.
Frequent-pattern growth (or FP-growth) adopts a divide-and-conquer strategy. Steps:
Compress the database representing frequent items into a frequent-pattern tree, or FP-tree
Divide the compressed database into a set of conditional databases
Mine each conditional database separately

13. Define iceberg query.
It computes an aggregate function over an attribute or set of attributes in order to find aggregate values above some specified threshold. Given a relation R with attributes a1, a2, ..., an and b, and an aggregate function agg_f, an iceberg query is of the form:

SELECT R.a1, R.a2, ..., R.an, agg_f(R.b)
FROM relation R
GROUP BY R.a1, R.a2, ..., R.an
HAVING agg_f(R.b) >= threshold

14. Mention a few approaches to mining multilevel association rules.
Uniform minimum support for all levels (uniform support)
Reduced minimum support at lower levels (reduced support)
Level-by-level independent
Level-cross filtering by single item
Level-cross filtering by k-itemset

15. What are multidimensional association rules?
Association rules that involve two or more dimensions or predicates.
Interdimension association rule: a multidimensional association rule with no repeated predicate or dimension.
Hybrid-dimension association rule: a multidimensional association rule with multiple occurrences of some predicates or dimensions.

16. Define constraint-based association mining.
Mining is performed under the guidance of various kinds of constraints provided by the user. The constraints include the following:
Knowledge type constraints
Data constraints
Dimension/level constraints
Interestingness constraints
Rule constraints

17. Define the concept of classification.
It is a two-step process:
A model is built describing a predefined set of data classes or concepts. The model is constructed by analyzing database tuples described by attributes.


The model is used for classification.
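The iceberg query from question 13 (GROUP BY plus a HAVING threshold) maps directly onto a dictionary aggregation. A sketch with hypothetical rows and column names:

```python
# Sketch of an iceberg query: SELECT region, SUM(amount) FROM rows
# GROUP BY region HAVING SUM(amount) >= threshold.
from collections import defaultdict

rows = [("east", 120), ("east", 80), ("west", 40), ("north", 300)]

def iceberg(rows, threshold):
    totals = defaultdict(int)
    for region, amount in rows:
        totals[region] += amount                       # aggregate: SUM
    return {r: s for r, s in totals.items() if s >= threshold}  # HAVING

print(iceberg(rows, 150))   # {'east': 200, 'north': 300}
```

Only the groups whose aggregate clears the threshold (the "tip of the iceberg") are returned; west, at 40, is discarded.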


18. What is a decision tree?
A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and leaf nodes represent classes or class distributions. The topmost node in a tree is the root node.

19. What is an attribute selection measure?
The information gain measure is used to select the test attribute at each node in the decision tree. Such a measure is referred to as an attribute selection measure, or a measure of the goodness of split.

20. Describe tree pruning methods.
When a decision tree is built, many of the branches will reflect anomalies in the training data due to noise or outliers. Tree pruning methods address this problem of overfitting the data. Approaches:
Prepruning
Postpruning

21. Define prepruning.
A tree is pruned by halting its construction early. Upon halting, the node becomes a leaf. The leaf may hold the most frequent class among the subset samples.

22. Define postpruning.
Postpruning removes branches from a fully grown tree. A tree node is pruned by removing its branches. E.g., the cost complexity algorithm.

23. What is meant by pattern?
A pattern represents the knowledge.

24. Define the concept of prediction.
Prediction can be viewed as the construction and use of a model to assess the class of an unlabeled sample, or to assess the value or value ranges of an attribute that a given sample is likely to have.

1. Define clustering.
Clustering is a process of grouping physical or conceptual data objects into clusters.

2. What do you mean by cluster analysis?
Cluster analysis is the process of analyzing the various clusters to organize the different objects into meaningful and descriptive groups.

3. What are the fields in which clustering techniques are used?
Clustering is used in biology to develop new plant and animal taxonomies.
Clustering is used in business to enable marketers to develop distinct groups of their customers and characterize each customer group on the basis of purchasing patterns.
Clustering is used in the identification of groups of automobile insurance policy customers.


Clustering is used in the identification of groups of houses in a city on the basis of house type, cost, and geographical location.
Clustering is used to classify documents on the web for information discovery.

4. What are the requirements of cluster analysis?
The basic requirements of cluster analysis are:
Dealing with different types of attributes
Dealing with noisy data
Constraints on clustering
Dealing with arbitrary shapes
High dimensionality
Ordering of input data
Interpretability and usability
Determining input parameters
Scalability

5. What are the different types of data used for cluster analysis?
The different types of data used for cluster analysis are interval-scaled, binary, nominal, ordinal, and ratio-scaled data.

6. What are interval-scaled variables?
Interval-scaled variables are continuous measurements on a linear scale, for example height and weight, weather temperature, or coordinates for any cluster. Distances between such measurements can be calculated using the Euclidean distance or the Minkowski distance.

7. Define binary variables. What are the two types of binary variables?
Binary variables are understood by two states, 0 and 1: when the state is 0 the variable is absent, and when the state is 1 the variable is present. There are two types of binary variables: symmetric and asymmetric. Symmetric binary variables have the same state values and weights; asymmetric binary variables do not have the same state values and weights.

8. Define nominal, ordinal, and ratio-scaled variables.
A nominal variable is a generalization of the binary variable. A nominal variable has more than two states; for example, a nominal variable color may consist of four states: red, green, yellow, or black. For nominal variables the total number of states is N, and the states are denoted by letters, symbols, or integers.
An ordinal variable also has more than two states, but all these states are ordered in a meaningful sequence.
A ratio-scaled variable makes positive measurements on a non-linear scale, such as an exponential scale, using a formula such as Ae^(Bt) or Ae^(-Bt), where A and B are constants.

9. What do you mean by the partitioning method?
In the partitioning method, a partitioning algorithm arranges all the objects into various partitions, where the total number of partitions is less than the total number of objects. Each partition represents a cluster. The two types of partitioning methods are k-means and k-medoids.

10. Define CLARA and CLARANS.


Clustering in LARge Applications is called CLARA. The efficiency of CLARA depends upon the size of the representative data set. CLARA does not work properly if no representative data set among the selected ones yields the best k-medoids. To overcome this drawback, a new algorithm, Clustering Large Applications based upon RANdomized search (CLARANS), was introduced. CLARANS works like CLARA; the only difference between CLARA and CLARANS is the clustering process that is done after selecting the representative data sets.

11. What is the hierarchical method?
The hierarchical method groups all the objects into a tree of clusters that are arranged in a hierarchical order. This method works with bottom-up or top-down approaches.

12. Differentiate agglomerative and divisive hierarchical clustering.
The agglomerative hierarchical clustering method works bottom-up. Each object starts in its own cluster; the singleton clusters are merged to make larger clusters, and merging continues until all the singleton clusters are merged into one big cluster that contains all the objects.
The divisive hierarchical clustering method works top-down. All the objects start within one big singleton cluster, and the large cluster is continuously divided into smaller clusters until each cluster contains a single object.

13. What is CURE?
Clustering Using REpresentatives is called CURE. Clustering algorithms generally work on spherical and similar-size clusters. CURE overcomes the problem of spherical and similar-size clusters and is more robust with respect to outliers.

14. Define the Chameleon method.
Chameleon is another hierarchical clustering method that uses dynamic modeling. Chameleon was introduced to overcome the drawbacks of the CURE method. In this method, two clusters are merged if the interconnectivity between the two clusters is greater than the interconnectivity between the objects within a cluster.

15. Define the density-based method.
The density-based method deals with arbitrary-shaped clusters. In density-based methods, clusters are formed on the basis of regions where the density of objects is high.

16. What is DBSCAN?
Density-Based Spatial Clustering of Applications with Noise is called DBSCAN. DBSCAN is a density-based clustering method that converts high-density object regions into clusters with arbitrary shapes and sizes. DBSCAN defines a cluster as a maximal set of density-connected points.

17. What do you mean by the grid-based method?
In this method, objects are represented by a multiresolution grid data structure. All the objects are quantized into a finite number of cells, and the collection of cells builds the grid structure of objects. The clustering operations are performed on that grid structure. This method is widely used because its processing time is very fast and is independent of the number of objects.

18. What is STING?


Statistical Information Grid is called STING; it is a grid-based multiresolution clustering method. In the STING method, all the objects are contained in rectangular cells, these cells are kept at various levels of resolution, and these levels are arranged in a hierarchical structure.

19. Define WaveCluster.
It is a grid-based multiresolution clustering method. In this method, all the objects are represented by a multidimensional grid structure, and a wavelet transformation is applied to find the dense regions. Each grid cell contains the information of the group of objects that map into that cell. A wavelet transformation is a signal processing technique that decomposes a signal into various frequency sub-bands.

20. What is the model-based method?
Model-based methods are used for optimizing a fit between a given data set and a mathematical model. This method uses the assumption that the data are distributed according to underlying probability distributions. There are two basic approaches in this method: 1. the statistical approach, and 2. the neural network approach.

21. What is the use of regression?
Regression can be used to solve classification problems, but it can also be used for applications such as forecasting. Regression can be performed using many different types of techniques; in essence, regression takes a set of data and fits the data to a formula.

22. What are the reasons for not using the linear regression model to estimate the output data?
There are many reasons. One is that the data do not fit a linear model. It is also possible that the data generally do represent a linear model, but the linear model generated is poor because noise or outliers exist in the data. Noise is erroneous data, and outliers are data values that are exceptions to the usual and expected data.

23. What are the two approaches used by regression to perform classification?
Regression can be used to perform classification using the following approaches:
1. Division: the data are divided into regions based on class.
2. Prediction: formulas are generated to predict the output class value.

24. What do you mean by logistic regression?
Instead of fitting the data to a straight line, logistic regression uses a logistic curve. The formula for the univariate logistic curve is

p = e^(c0 + c1*x1) / (1 + e^(c0 + c1*x1))

The logistic curve gives a value between 0 and 1, so it can be interpreted as the probability of class membership.

25. What is time series analysis?
A time series is a set of attribute values over a period of time. Time series analysis may be viewed as finding patterns in the data and predicting future values.
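The logistic curve from question 24 is easy to check numerically; a small sketch (the coefficients c0 and c1 are hypothetical placeholders):

```python
# Sketch of the univariate logistic curve from Q24:
# p = e^(c0 + c1*x) / (1 + e^(c0 + c1*x)), always between 0 and 1.
import math

def logistic(x, c0=0.0, c1=1.0):
    z = c0 + c1 * x
    return math.exp(z) / (1 + math.exp(z))

print(logistic(0))    # 0.5: the midpoint of the curve
```

Because the output is a probability, a threshold such as p >= 0.5 turns the regression into a classifier.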


26. What are the various detected patterns?
Detected patterns may include:
Trends: systematic, non-repetitive changes to the values over time.
Cycles: the observed behavior is cyclic.
Seasonal: the detected patterns may be based on time of year, month, or day.
Outliers: to assist in pattern detection, techniques may be needed to remove or reduce the impact of outliers.

27. What is smoothing?
Smoothing is an approach that is used to remove the nonsystematic behaviors found in a time series. It usually takes the form of finding moving averages of attribute values. It is used to filter out noise and outliers.

28. Give the formula for Pearson's r.
One standard formula to measure correlation is the correlation coefficient r, sometimes called Pearson's r. Given two time series X and Y with means X̄ and Ȳ, each with n elements, the formula for r is

r = Σ(xi - X̄)(yi - Ȳ) / (Σ(xi - X̄)² Σ(yi - Ȳ)²)^(1/2)

29. What is autoregression?
Autoregression is a method of predicting a future time series value by looking at previous values. Given a time series X = (x1, x2, ..., xn), a future value x(n+1) can be found as a linear combination of previous values plus a random error e(n+1):

x(n+1) = φ(n)·x(n) + φ(n-1)·x(n-1) + ... + e(n+1)

Here e(n+1) represents a random error at time n+1. Each element in the time series can thus be viewed as a combination of a random error and a linear combination of previous values.

1. Define data warehouse.
A data warehouse is a repository of multiple heterogeneous data sources organized under a unified schema at a single site to facilitate management decision making; or, a data warehouse is a subject-oriented, time-variant, and nonvolatile collection of data in support of management's decision-making process.

2. What are operational databases?
The large databases that organizations maintain and that are updated by daily transactions are called operational databases.

3. Define OLTP.
If an on-line operational database system is used for efficient retrieval, efficient storage, and management of large amounts of data, then the system is said to be on-line transaction processing (OLTP).

4. Define OLAP.
Data warehouse systems serve users or knowledge workers in the role of data analysis and decision making. Such systems can organize and present data in various formats. These systems are known as on-line analytical processing (OLAP) systems.
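Pearson's r from question 28 can be sketched directly from its formula (the sample series below are made up):

```python
# Sketch of Pearson's r from Q28:
# r = Σ(xi - X̄)(yi - Ȳ) / sqrt(Σ(xi - X̄)² · Σ(yi - Ȳ)²)
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

print(pearson_r([1, 2, 3], [2, 4, 6]))    # 1.0: perfectly correlated
```

r is +1 for a perfect positive linear relationship, -1 for a perfect negative one, and near 0 when the two series are uncorrelated.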


5. How is a database design represented in OLTP systems?
Entity-relationship model.

6. How is a database design represented in OLAP systems?
Star schema
Snowflake schema
Fact constellation schema

7. Write short notes on the multidimensional data model.
Data warehouses and OLAP tools are based on a multidimensional data model. This model is used for the design of corporate data warehouses and department data marts. It supports the star, snowflake, and fact constellation schemas. The core of the multidimensional model is the data cube.

8. Define data cube.
It consists of a large set of facts (or measures) and a number of dimensions.

9. What are facts?
Facts are numerical measures. Facts can also be considered as quantities by which we can analyze the relationship between dimensions.

10. What are dimensions?
Dimensions are the entities or perspectives with respect to which an organization keeps records, and they are hierarchical in nature.

11. Define dimension table.
A dimension table is used for describing a dimension. E.g., a dimension table for item may contain the attributes item_name, brand, and type.

12. Define fact table.
A fact table contains the names of facts (or measures) as well as keys to each of the related dimension tables.

13. What is a lattice of cuboids?
In the data warehousing research literature, a cube can also be called a cuboid. For different sets of dimensions, we can construct a lattice of cuboids, each showing the data at a different level of summarization. The lattice of cuboids is also referred to as a data cube.

14. What is the apex cuboid?
The 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid. The apex cuboid is typically denoted by "all".

15. List out the components of the star schema.
A large central table (fact table) containing the bulk of data with no redundancy.
A set of smaller attendant tables (dimension tables), one for each dimension.
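The size of the lattice of cuboids in questions 13 and 14 can be counted: if dimension i has L[i] hierarchy levels (not counting the virtual top level "all"), the cube has prod(L[i] + 1) cuboids. This formula is not stated in the text above; it is the standard counting argument, sketched here with hypothetical dimension sizes:

```python
# Sketch: number of cuboids in the lattice of a cube with n dimensions,
# where dimension i has L[i] concept-hierarchy levels (excluding "all").
from math import prod

def cuboid_count(levels):
    return prod(l + 1 for l in levels)   # each dimension adds its levels or "all"

# A hypothetical cube with 3 dimensions of 2, 3, and 1 levels:
print(cuboid_count([2, 3, 1]))   # 3 * 4 * 2 = 24
```

With no hierarchies (one level per dimension), this reduces to 2^n cuboids for n dimensions, the apex cuboid ("all" on every dimension) among them.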


Bharath Annamaneni [JNTU WORLD]


16. What is the snowflake schema?
The snowflake schema is a variant of the star schema model in which some dimension tables are normalized, thereby further splitting the tables into additional tables.

17. List out the components of the fact constellation schema.
It requires multiple fact tables to share dimension tables. This kind of schema can be viewed as a collection of stars and hence is known as a galaxy schema (or fact constellation schema).

18. Point out the major difference between the star schema and the snowflake schema.
The dimension tables of the snowflake schema model may be kept in normalized form to reduce redundancies. Such tables are easy to maintain and save storage space.

19. Which is more popular in data warehouse design, the star schema model or the snowflake schema model?
The star schema model, because the snowflake structure can reduce query effectiveness: more joins are needed to execute a query.

20. Define concept hierarchy.
A concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-level concepts.

21. Define total order.
If the attributes of a dimension form a concept hierarchy such as street < city < province_or_state < country, it is said to be a total order.

22. Define partial order.
If the attributes of a dimension form a lattice such as day < {month < quarter; week} < year, it is said to be a partial order.

23. Define schema hierarchy.
A concept hierarchy that is a total or partial order among attributes in a database schema is called a schema hierarchy.

24. List out the OLAP operations in the multidimensional data model.
- Roll-up
- Drill-down
- Slice and dice
- Pivot (or rotate)

25. What is the roll-up operation?
The roll-up operation, also called the drill-up operation, performs aggregation on a data cube either by climbing up a concept hierarchy for a dimension or by dimension reduction.

26. What is the drill-down operation?
Drill-down is the reverse of the roll-up operation. It navigates from less detailed data to more detailed data. Drill-down can be performed by stepping down a concept hierarchy for a dimension.
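The roll-up operation can be illustrated with a small sketch: climbing a hypothetical city < country concept hierarchy on the location dimension aggregates detail cells into coarser ones, while drill-down simply returns to the stored detail cells. The data below is invented for illustration:

```python
from collections import defaultdict

# Hypothetical detail-level cells: sales per (city, month).
# Assumed hierarchy on the location dimension: city < country.
city_to_country = {"Munich": "Germany", "Berlin": "Germany", "Lyon": "France"}
detail = {("Munich", "Jan"): 10, ("Berlin", "Jan"): 7, ("Lyon", "Feb"): 5}

# Roll-up on location: climb city -> country and aggregate the measure.
rolled = defaultdict(int)
for (city, month), sales in detail.items():
    rolled[(city_to_country[city], month)] += sales

# Drill-down is the reverse: navigate back to the stored (city, month) detail.
print(dict(rolled))  # {('Germany', 'Jan'): 17, ('France', 'Feb'): 5}
```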


27. What is the slice operation?
The slice operation performs a selection on one dimension of the cube, resulting in a subcube.

28. What is the dice operation?
The dice operation defines a subcube by performing a selection on two or more dimensions.

29. What is the pivot operation?
Pivot is a visualization operation that rotates the data axes to provide an alternative presentation of the data.

30. List out the views in the design of a data warehouse.
- Top-down view
- Data source view
- Data warehouse view
- Business query view

31. What are the methods for developing large software systems?
- Waterfall method
- Spiral method

32. How does the waterfall method operate?
The waterfall method performs a structured and systematic analysis at each step before proceeding to the next, like a waterfall falling from one step to the next.

33. How does the spiral method operate?
The spiral method involves the rapid generation of increasingly functional systems, with short intervals between successive releases. It is considered a good choice for data warehouse development, especially for data marts, because the turnaround time is short, modifications can be done quickly, and new designs and technologies can be adopted in a timely manner.

34. List out the steps of the data warehouse design process.
- Choose a business process to model.
- Choose the grain of the business process.
- Choose the dimensions that will apply to each fact table record.
- Choose the measures that will populate each fact table record.

35. Define ROLAP.
The ROLAP model is an extended relational DBMS that maps operations on multidimensional data to standard relational operations.

36. Define MOLAP.
The MOLAP model is a special-purpose server that directly implements multidimensional data and operations.

37. Define HOLAP.
The hybrid OLAP approach combines ROLAP and MOLAP technology, benefiting from the greater scalability of ROLAP and the faster computation of MOLAP. For example,
a HOLAP server may allow large


volumes of detail data to be stored in a relational database, while aggregations are kept in a separate MOLAP store.

38. What is an enterprise warehouse?
An enterprise warehouse collects all the information about subjects spanning the entire organization. It provides corporate-wide data integration, usually from one or more operational systems or external information providers. It contains detailed as well as summarized data and can range in size from a few gigabytes to hundreds of gigabytes, terabytes, or beyond. An enterprise data warehouse may be implemented on traditional mainframes, UNIX superservers, or parallel-architecture platforms. It requires extensive business modeling and may take years to design and build.

39. What is a data mart?
A data mart is a database that contains a subset of the data present in a data warehouse. Data marts are created to structure the data in a data warehouse according to issues such as hardware platforms and access control strategies. We can divide a data warehouse into data marts after the data warehouse has been created. Data marts are usually implemented on low-cost departmental servers that are UNIX or Windows NT based. The implementation cycle of a data mart is likely to be measured in weeks rather than months or years.

40. What are dependent and independent data marts?
Dependent data marts are sourced directly from enterprise data warehouses. Independent data marts are built from data captured from one or more operational systems or external information providers, or from data generated locally within a particular department or geographic area.

41. What is a virtual warehouse?
A virtual warehouse is a set of views over operational databases. For efficient query processing, only some of the possible summary views may be materialized. A virtual warehouse is easy to build but requires excess capacity on operational database servers.

42. Define indexing.
Indexing is a technique used for efficient data retrieval, i.e., accessing data in a faster manner. When a table grows in volume, the indexes also grow in size, requiring more storage.

43. What are the types of indexing?
- B-tree indexing
- Bitmap indexing
- Join indexing

44. Define metadata.
Metadata in a data warehouse describes data about data; that is, metadata are the data that define warehouse objects. Metadata are created for the data names and definitions of the given warehouse.

45. Define VLDB.
Very Large Database. A database whose size is greater than 100 GB is said to be a very large database.
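Of the index types listed above, bitmap indexing is the easiest to sketch. The example below is illustrative and not tied to any particular DBMS: it keeps one bit vector per distinct attribute value, so an equality predicate becomes a bit test rather than a full table scan:

```python
# Column values for an attribute "type", one per row of a hypothetical table.
rows = ["electronics", "stationery", "electronics", "furniture"]

# Build one bit vector per distinct value: bit i is set when row i has that value.
bitmaps = {}
for i, value in enumerate(rows):
    bitmaps.setdefault(value, 0)
    bitmaps[value] |= 1 << i

# Answering "type = 'electronics'" is now a bit test per row, not a value comparison.
matches = [i for i in range(len(rows)) if bitmaps["electronics"] >> i & 1]
print(matches)  # [0, 2]
```

Bitmap indexes pay off for low-cardinality attributes, since AND/OR of predicates reduces to bitwise operations on the vectors.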


1. What are the classifications of tools for data mining?
- Commercial tools
- Public-domain tools
- Research prototypes


2. What are commercial tools?
Commercial tools are products that are usually associated with consulting activity by the same company, for example:
1. Intelligent Miner from IBM
2. SAS System from SAS Institute
3. Thought from Right Information Systems, etc.

3. What are public-domain tools?
Public-domain tools are largely freeware with just registration fees, for example:
- Brute from the University of Washington
- MLC++ from Stanford University, Stanford, California

4. What are research prototypes?
Some research products may find their way into the commercial market, for example:
- DBMiner from Simon Fraser University, British Columbia
- Mining Kernel System from the University of Ulster, Northern Ireland

5. What is the difference between generic single-task tools and generic multi-task tools?
Generic single-task tools generally use neural networks or decision trees. They cover only the data mining step and require extensive pre-processing and post-processing. Generic multi-task tools offer modules for the pre-processing and post-processing steps and also offer a broad selection of several popular data mining algorithms, such as clustering.

6. What are the areas in which data warehouses are used at present and in the future?
The potential subject areas in which data warehouses may be developed at present and in the future are:
1. Census data: The Registrar General and Census Commissioner of India decennially compiles information on all individuals, villages, population groups, etc. This information is wide-ranging, such as the individual slip, a compilation of information on individual households, of which a database of a 5% sample is maintained for analysis. A data warehouse can be built from this database, upon which OLAP techniques can be applied; data mining can also be performed for analysis and knowledge discovery.
2. Prices of essential commodities: The Ministry of Food and Civil Supplies, Government of India compiles daily data for about 300 observation centres across the country on the prices of essential commodities such as rice, edible oil, etc. A data warehouse can be built for this data and OLAP techniques applied for its analysis.

7. What are the other areas for data warehousing and data mining?
- Agriculture
- Rural development
- Health


- Planning
- Education
- Commerce and trade


8. Specify some of the sectors in which data warehousing and data mining are used.
- Tourism
- Program implementation
- Revenue
- Economic affairs
- Audit and accounts

9. Describe the use of DBMiner.
DBMiner is used to perform data mining functions, including characterization, association, classification, prediction and clustering.

10. Applications of DBMiner.
A mining system for both OLAP and data mining in relational databases and data warehouses. Used in medium to large relational databases with fast response time.

11. Give some data mining tools.
- DBMiner
- GeoMiner
- MultiMedia Miner
- WeblogMiner

12. Mention some of the application areas of data mining.
- DNA analysis
- Financial data analysis
- Retail industry
- Telecommunication industry
- Market analysis
- Banking industry
- Health care analysis

13. Differentiate data query and knowledge query.
A data query finds concrete data stored in a database and corresponds to a basic retrieval statement in a database system. A knowledge query finds rules, patterns and other kinds of knowledge in a database and corresponds to querying database knowledge, including deduction rules, integrity constraints, generalized rules, frequent patterns and other regularities.

14. Differentiate direct query answering and intelligent query answering.
Direct query answering means that a query is answered by returning exactly what is being asked. Intelligent query answering consists of analyzing the intent of the query and providing generalized, neighborhood, or associated information relevant to the query.
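The data query vs. knowledge query distinction can be made concrete with a toy example (the transaction data is invented for illustration): a data query returns stored rows, while a knowledge query derives a pattern, here frequent item pairs, from them:

```python
from itertools import combinations
from collections import Counter

# Hypothetical market-basket transactions.
transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]

# Data query: which transactions contain "milk"? (retrieves concrete stored data)
with_milk = [t for t in transactions if "milk" in t]

# Knowledge query: which item pairs occur together in >= 2 transactions? (a derived pattern)
pair_counts = Counter(frozenset(p) for t in transactions
                      for p in combinations(sorted(t), 2))
frequent_pairs = {pair for pair, n in pair_counts.items() if n >= 2}

print(len(with_milk))                               # 2
print(sorted(sorted(p) for p in frequent_pairs))    # [['bread', 'butter'], ['bread', 'milk']]
```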


15. Define visual data mining.
Visual data mining discovers implicit and useful knowledge from large data sets using data and/or knowledge visualization techniques. It is the integration of data visualization and data mining.

16. What does audio data mining mean?
Audio data mining uses audio signals to indicate patterns in data or features of data mining results. Patterns are transformed into sound and music, so that interesting or unusual patterns can be identified by listening to pitch, rhythm, tune and melody.

Steps involved in DNA analysis:
- Semantic integration of heterogeneous, distributed genome databases
- Similarity search and comparison among DNA sequences
- Association analysis: identification of co-occurring gene sequences
- Path analysis: linking genes to different stages of disease development
- Visualization tools and genetic data analysis

17. What are the factors involved while choosing a data mining system?
- Data types
- System issues
- Data sources
- Data mining functions and methodologies
- Coupling data mining with database and/or data warehouse systems
- Scalability
- Visualization tools
- Data mining query language and graphical user interface

18. Define DMQL.
DMQL stands for Data Mining Query Language. It specifies clauses and syntax for performing different types of data mining tasks, for example data classification, data clustering and mining association rules. It uses SQL-like syntax to mine databases.

19. Define text mining.
Text mining is the extraction of meaningful information from large amounts of free-format textual data. It is useful in artificial intelligence and pattern matching, and is also known as text data mining, knowledge discovery from text, or content analysis.

20. What does web mining mean?
Web mining is a technique to process information available on the web and search for useful data. It is used to discover web pages, text documents, multimedia files, images, and other types of resources from the web. It is applied in several fields such as e-commerce, information filtering, fraud detection, and education and research.

21. Define spatial data mining.
Spatial data mining is the extraction of undiscovered and implied spatial information. Spatial data is data that is associated with a location. It is used in several fields such as geography, geology, medical imaging, etc.

22. Explain multimedia data mining.


Multimedia data mining mines large multimedia databases. It does not retrieve specific information from those databases; rather, it derives new relationships, trends, and patterns from the stored multimedia data. It is used in medical diagnosis, stock markets, the animation industry, the airline industry, traffic management systems, surveillance systems, etc.
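The text mining defined in Q19 can be sketched as a simple term-frequency extraction over free-format text. The documents and stop-word list below are invented for illustration:

```python
import re
from collections import Counter

# Illustrative free-format documents and a tiny stop-word list.
stop_words = {"the", "a", "of", "and", "to", "is", "was"}
docs = [
    "The bank approved the loan and the loan was repaid.",
    "The loan officer reviewed the loan application.",
]

# Tokenize, normalize case, drop stop words, and count content terms.
counts = Counter(
    word
    for doc in docs
    for word in re.findall(r"[a-z]+", doc.lower())
    if word not in stop_words
)

print(counts.most_common(1))  # [('loan', 4)]
```

Real text mining systems add stemming, phrase detection, and weighting schemes such as TF-IDF on top of this kind of term extraction.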

REFERENCES
1. Weka Reference, http://www.gnu.org/copyleft/gpl.html
2. Prefuse Visualization Toolkit. See http://prefuse.org/ for more information on the project.
3. WekaWiki: http://weka.wikispaces.com/
4. Extensions for Weka's main GUI on WekaWiki: http://weka.wikispaces.com/Extensions+for+Weka%27s+main+GUI
5. Adding tabs in the Explorer on WekaWiki: http://weka.wikispaces.com/Adding+tabs+in+the+Explorer
6. Explorer visualization plugins on WekaWiki: http://weka.wikispaces.com/Explorer+visualization+plugins
