
ICADABAI 2009 – Abstracts

TABLE OF CONTENTS

SCHEDULE – ICADABAI 2009 .................................................................................................................. 7


BUSINESS ANALYTICS – A TIME FOR INTROSPECTION ............................................................ 16
GEOMETRIC CONVERGENCE OF THE HAAR PX-DA ALGORITHM FOR THE BAYESIAN
MULTIVARIATE REGRESSION MODEL WITH STUDENT T ERRORS ...................................... 18
MULTI-TREATMENT LOCATION-INVARIANT OPTIMAL RESPONSE-ADAPTIVE DESIGNS
FOR CONTINUOUS RESPONSES ......................................................................................................... 19
STATISTICAL ISSUES WITH SURROGATE ENDPOINTS TO ESTIMATE THE DIFFERENCE
OF TREATMENT EFFECTS................................................................................................................... 20
CONDITIONAL INFERENCES AND LARGE SAMPLE TESTS FOR INTENSITY
PARAMETERS IN POWER LAW PROCESS ....................................................................................... 21
STOCK PRICE AND MACROECONOMIC INDICATORS IN INDIA: EVIDENCE FROM
CAUSALITY AND COINTEGRATION ANALYSIS ............................................................................ 22
STOCK PRICE RETURN DISTRIBUTION: NON-GAUSSIAN VS. GAUSSIAN- AN EMPIRICAL
EXAMINATION ........................................................................................................................................ 23
SKEW-ELLIPTICALITY IN HEDGE FUND RETURNS: WHICH IS THE BEST FIT
DISTRIBUTION? ...................................................................................................................................... 24
A CASE STUDY - TO PRIORITIZE THE INFORMATION MANAGEMENT REGISTER (IMR)
ISSUES USING ∆RWA (RISK WEIGHTED ASSETS) APPROACH .................................................... 25
CLOSENESS BETWEEN HEURISTIC AND OPTIMUM SELECTIONS OF PORTFOLIO: AN
EMPIRICAL ANALYSIS.......................................................................................................................... 26
DECISION ANALYTICS: THE CHALLENGE OF LEVERAGING THE TRANSDUCTION OF
PROCESSES............................................................................................................................................... 27
CLUSTERING OF INFLAMMATORY SKIN DISEASE PATIENTS USING LATENT CLASS
ANALYSIS.................................................................................................................................................. 28
IMPROVING MAXIMUM MARGIN CLUSTERING THROUGH SPAN OF SUPPORT VECTORS
MINIMIZATION ....................................................................................................................................... 29
PROBABILISTIC IDENTIFICATION OF DEFECTS IN AN INDUSTRIAL PROCESS USING
LEVEL CROSSING TECHNIQUES ....................................................................................................... 30
ON BUILDING INFORMATION WAREHOUSES............................................................................... 31
A GENERALIZED FRAMEWORK FOR ESTIMATING CUSTOMER LIFETIME VALUE WHEN
CUSTOMER LIFETIMES ARE NOT OBSERVED .............................................................................. 32
A SEGMENTATION APPROACH USING CUSTOMER LIFETIME VALUE: INSIGHTS FOR
CUSTOMER RELATIONSHIP MANAGEMENT ................................................................................ 33
DOUBLE JEOPARDY DIAGNOSTICS: A TOOL TO UNDERSTAND MARKET DYNAMICS ... 34
COMPELLING SIGNALS: COMPETITIVE POSITIONING RESPONSES TO SERVICE MARK
FILINGS ..................................................................................................................................................... 35
USING LISREL FOR STRUCTURAL EQUATION SUB-MODELS .................................................. 36

COVERING BASED ROUGH SET APPROACH TO UNCERTAINTY MANAGEMENT IN
DATABASES .............................................................................................................................................. 37
REAL TIME SPIKE DETECTION FROM MICRO ELECTRODE ARRAY RECORDINGS USING
WAVELET DENOISING AND THRESHOLDING............................................................................... 38
MOTIF FINDING USING DNA DATA COMPRESSION .................................................................... 39
AN APPROACH OF SUMMARIZATION OF HINDI TEXT BY EXTRACTION ............................ 40
FORMAL MODELING OF DIGITAL RIGHTS MANAGEMENT FOR SUSTAINABLE
DEVELOPMENT OF E-COMMERCE................................................................................................... 41
RECOVERY RATE MODELING FOR CONSUMER LOAN PORTFOLIO ..................................... 42
THE PROACTIVE PRICING MODEL- USING FORECASTED PRICE ESCALATION
FUNCTION................................................................................................................................................. 43
BEHAVIOURAL SEGMENTATION OF CREDIT CARD CUSTOMERS ........................................ 44
PRECISION TARGETING MODELS FOR IMPROVING ROI OF DIRECT MARKETING
INTERVENTIONS..................................................................................................................................... 45
CUSTOMER PURCHASE BEHAVIOUR PREDICTION APPROACH FOR MANAGING THE
CUSTOMER FAVOURITES LIST ON A GROCERY E-COMMERCE PORTAL........................... 46
PRODUCT INVENTORY MANAGEMENT AT BPCL & EFFECTIVE AND EFFICIENT
DISTRIBUTION OF PRODUCTS TO DEMAND CENTERS.............................................................. 47
INDIAN MUTUAL FUNDS PERFORMANCE: 1999-2008................................................................... 48
HOUSEHOLD MEAT DEMAND IN INDIA – A SYSTEMS APPROACH USING MICRO LEVEL
DATA........................................................................................................................................................... 49
THE LEAD-LAG RELATIONSHIP BETWEEN NIFTY SPOT AND NIFTY FUTURES: AN
INTRADAY ANALYSIS ........................................................................................................................... 50
CAN ETF ARBITRAGE BE EXTENDED TO SECTOR TRADING? ................................................ 51
DEVELOPMENT OF EMOTIONAL LABOUR SCALE IN INDIAN CONTEXT ............................ 52
WOMEN IN SMALL BUSINESSES: A STUDY OF ENTREPRENEURIAL ISSUES ...................... 53
EMPLOYEES PERCEPTION OF THE FACTORS INFLUENCING TRAINING
EFFECTIVENESS ..................................................................................................................................... 54
ONE SHOE DOESN’T FIT ALL: AN INVESTIGATION INTO THE PROCESSES THAT LEAD
TO SUCCESS IN DIFFERENT TYPES OF ENTREPRENEURS........................................................ 55
USE OF ANALYTICS IN INDIAN ENTERPRISES: A SURVEY ...................................................... 56
USING DATA TO MAKE GOOD MANAGEMENT DECISIONS ...................................................... 57
ENHANCING BUSINESS DECISIONS THROUGH DATA ANALYTICS AND USE OF GIS ....... 58
A BUSINESS APPLICATION .................................................................................................................. 58
TRENDS IN TECHNICAL PROGRESS IN INDIA, 1968 TO 2003 ..................................................... 59
TERRORIST ATTACK & CHANGES IN THE PRICE OF THE UNDERLYING OF INDIAN
DEPOSITORIES ........................................................................................................................................ 60
CO-INTEGRATION OF US & INDIAN STOCK INDEXES ................................................................ 61
A COMMON FINANCIAL PERFORMANCE APPRAISAL MODEL FOR EVALUATING
DISTRICT CENTRAL COOPERATIVE BANKS ................................................................................. 62

ANALYSIS OF RENDERING TECHNIQUES FOR THE PERCEPTION OF 3D SHAPES ............ 63
MMER: AN ALGORITHM FOR CLUSTERING CATEGORICAL DATA USING ROUGH SET
THEORY..................................................................................................................................................... 64
ROLE OF FORECASTING IN DECISION MAKING SCIENCE ....................................................... 65
BULLWHIP DIMINUTION USING CONTROL ENGINEERING ..................................................... 66
AUTOMATIC DETECTION OF CLUSTERS ....................................................................................... 67
REVENUE MANAGEMENT ................................................................................................................... 68
DATA ANALYSIS USING SAS IN RETAIL SECTOR......................................................................... 69
SEGMENTING THE APPAREL CONSUMERS IN THE ORGANIZED RETAIL MARKET........ 70
THE IMPACT OF PSYCHOGRAPHICS ON THE FOOTWEAR PURCHASE OF YOUTH:
IMPLICATIONS FOR THE MANUFACTURERS TO REPOSITION THEIR PRODUCTS. ......... 71
FACTOR ANALYTICAL APPROACH FOR SITE SELECTION OF RETAIL OUTLET - A CASE
STUDY ........................................................................................................................................................ 72
A STATISTICAL ANALYSIS FOR UNDERSTANDING MOBILE PHONE USAGE PATTERN
AMONG COLLEGE-GOERS IN THE DISTRICT OF KACHCHH, GUJARAT.............................. 73
EXPLORING THE FACTORS AFFECTING THE MIGRATION FROM TRADITIONAL
BANKING CHANNELS TO ALTERNATE BANKING CHANNELS (INTERNET BANKING,
ATM) ........................................................................................................................................................... 74
WEATHER BUSINESS IN INDIA – POTENTIAL & CHALLENGES............................................... 75
UNDERSTANDING OF HAPPINESS AMONG INDIAN YOUTH: A QUALITATIVE APPROACH
...................................................................................................................................................................... 76
ANALYTICAL APPROACH FOR CREDIT ASSESSMENT OF MICROFINANCE BORROWERS
...................................................................................................................................................................... 77
DATA MINING & BUSINESS INTELLIGENCE IN HEALTHCARE ............................................... 78
BUSINESS INTELLIGENCE IN CUSTOMER RELATIONSHIP MANAGEMENT, A SYNERGY
FOR THE RETAIL BANKING INDUSTRY .......................................................................................... 79
‘COMPETITIVE INTELLIGENCE’ IN PRICING ANALYTICS ....................................................... 81
RETAIL ANALYTICS AND ‘LIFESTYLE NEEDS’ SEGMENTATIONS ........................................ 82
REVENUE/PROFIT MANAGEMENT IN POWER STATIONS BY MERIT ORDER OPERATION
...................................................................................................................................................................... 83
HOW TO HANDLE MULTIPLE UNSYSTEMATIC SHOCKS TO A TIME SERIES
FORECASTING SYSTEM - AN APPLICATION TO RETAIL SALES FORECASTING ............... 84
A MODEL USING SCIENTIFIC METHOD TO CUT DOWN COSTS BY EFFICIENT DESIGN OF
SUPPLY CHAIN IN POWER SECTOR ................................................................................................. 85
CLUSTERING AS A BUSINESS INTELLIGENCE TOOL ................................................................. 86
VALIDATING SERVICE CONVENIENCE SCALE AND PROFILING CUSTOMERS: A STUDY
IN THE INDIAN RETAIL CONTEXT.................................................................................................... 87
A MODEL FOR CLASSIFICATION AND PRIORITIZATION OF CUSTOMER
REQUIREMENTS IN THE VALUE CHAIN OF INSURANCE INDUSTRY..................................... 88

ON THE FOLLY OF REWARDING WITHOUT MEASURING: A CASE STUDY ON
PERFORMANCE APPRAISAL OF SALES OFFICERS AND SALES MANAGERS IN A
PHARMACEUTICAL COMPANY ......................................................................................................... 89
THE FORMAT OR THE STORE? HOW BUYERS MAKE THEIR CHOICE................................. 90
CONSUMER INVOLVEMENT FOR DURABLE AND NON DURABLE PRODUCT: KEY
INDICATORS AND ITS IMPACT ......................................................................................................... 91
DEVELOPMENT OF UTILITY FUNCTION FOR LIFE INSURANCE BUYERS IN THE INDIAN
MARKET .................................................................................................................................................... 92
A RIDIT APPROACH TO EVALUATE THE VENDOR PERCEPTION TOWARDS BIDDING
PROCESS IN A VENDOR-VENDEE RELATIONSHIP....................................................................... 93
LINEAR PROBABILISTIC APPROACH TO FLEET SIZE OPTIMISATION ................................ 94
OPTIMISATION OF MANUFACTURING LEAD TIME IN AN ENGINE VALVE
MANUFACTURING COMPANY USING ECRS TECHNIQUE ......................................................... 95
EFFICIENT DECISIONS USING CREDIT SCORING MODELS...................................................... 96
IMPROVING PREDICTIVE POWER OF BINARY RESPONSE MODEL USING MULTI STEP
LOGISTIC APPROACH........................................................................................................................... 97
NET OPINION IN A BOX ........................................................................................................................ 98
USING INVESTIGATIVE ANALYTICS & MARKET-MIX MODELS FOR BUSINESS RULE &
STRATEGY FORMULATION – A CPG CASE STUDY ...................................................................... 99
IMPROVE DISPATCH CAPACITY OF CENTRAL PHARMACY.................................................. 100
APPLICATION OF NEURAL NETWORKS IN STATISTICAL CONTROL CHARTS FOR
PROCESS QUALITY CONTROL ......................................................................................................... 101
MEASUREMENT OF RISK AND IPO UNDERPRICE...................................................................... 102
EFFICIENCY OF MICROFINANCE INSTITUTIONS IN INDIA.................................................... 103
MEASURING EFFICIENCY OF INDIAN RURAL BANKS USING DATA ENVELOPMENT
ANALYSIS................................................................................................................................................ 104
RANKING R&D INSTITUTIONS: A DEA STUDY IN THE INDIAN CONTEXT ......................... 105
A NEW FILTERING APPROACH TO CREDIT RISK ...................................................................... 106
VOLATILITY OF EURODOLLAR FUTURES AND GAUSSIAN HJM TERM STRUCTURE
MODELS................................................................................................................................................... 107
WAVELET BASED VOLATILITY CLUSTERING ESTIMATION OF FOREIGN EXCHANGE
RATES....................................................................................................................................................... 108
MODELLING MULTIVARIATE GARCH MODELS WITH R: THE CCGARCH PACKAGE... 109
WIND ENERGY: MODELS AND INFERENCE ................................................................................. 110
FIELD DATA ANALYSIS - A DRIVER FOR BUSINESS INTELLIGENCE AND PROACTIVE
CUSTOMER ORIENTED APPROACH ............................................................................................... 111
SIMPLE ALGORITHMS FOR PEAK DETECTION IN TIME-SERIES ......................................... 112
USING THE DECISION TREE APPROACH FOR SEGMENTATION ANALYSIS – AN
ANALYTICAL OVERVIEW.................................................................................................................. 113
NOVEL BUSINESS APPLICATION - BUSINESS ANALYTICS..................................................... 114

SERVICE QUALITY EVALUATION ON OCCUPATIONAL HEALTH IN FISHING SECTOR
USING GREY RELATIONAL ANALYSIS TO LIKERT SCALE SURVEYS ................................. 115
AN EMPIRICAL STUDY ON PERCEPTION OF CONSUMER IN INSURANCE SECTOR........ 116
TWO COMPONENT CUSTOMER RELATIONSHIP MANAGEMENT MODEL FOR HEALTH
CARE SERVICES.................................................................................................................................... 118
AN ANALYTICAL STUDY OF THE EFFECT OF ADVERTISEMENT ON THE CONSUMERS
OF MIDDLE SIZE TOWN ..................................................................................................................... 119
EMPIRICAL FRAMEWORK OF BAYESIAN APPROACH TO PURCHASE INCIDENCE
MODEL..................................................................................................................................................... 121
EXPLORING TEMPORAL ASSOCIATIVE CLASSIFIERS FOR BUSINESS ANALYTICS....... 122
APPLICATION OF ANALYTICAL PROCESS FRAMEWORK FOR OPTIMIZATION OF NEW
PRODUCT LAUNCHES IN CONSUMER PACKAGED GOODS AND RETAIL INDUSTRY ..... 124
THE PREDICTIVE ANALYTICS USING INNOVATIVE DATA MINING APPROACH ............ 125
ON ROUGH APPROXIMATIONS OF CLASSIFICATIONS, REPRESENTATION OF
KNOWLEDGE AND MULTIVALUED LOGIC.................................................................................. 126
SB-ROBUST ESTIMATION OF PARAMETERS OF CIRCULAR NORMAL DISTRIBUTION . 127
BAYESIAN ANALYSIS OF RANK DATA WITH COVARIATES ................................................... 128
SELECTING A STROKE RISK MODEL USING PARALLEL GENETIC ALGORITHM........... 129
LINKING PSYCHOLOGICAL EMPOWERMENT TO WORK-OUTCOMES .............................. 130
TO IDENTIFY THE EMPLOYABILITY SKILLS FOR MANAGERS THROUGH THE CONTENT
ANALYSIS OF THE SELECTED JOB ADVERTISEMENTS........................................................... 131
PERFORMANCE MEASUREMENT IN RELIEF CHAIN: AN INDIAN PERSPECTIVE ............ 132
MACHINE LEARNING APPROACH FOR PREDICTING QUALITY OF COTTON USING
SUPPORT VECTOR MACHINE........................................................................................................... 133
MACHINE LEARNING TECHNIQUES: APPROACH FOR MAPPING OF MHC CLASS
BINDING NONAMERS .......................................................................................................................... 134
THE CLICK CLICK AGREEMENTS –THE LEGAL PERSPECTIVES ......................................... 135


Schedule – ICADABAI 2009

1st IIMA International Conference on Advanced Data Analysis, Business Analytics and Intelligence
6-7 June 2009, Ahmedabad, India

6th June 2009

8:00-9:00    Registration
9:00-9:45    Inauguration
9:45-10:30   Keynote Address: Dr. Siddhartha Roy, Chief Economist, TATA Group

11:00-13:00  Session 1-1

Vivekananda Roy, James P. Hobert (ic031): Geometric Convergence of the Haar PX-DA Algorithm for the Bayesian Multivariate Regression Model with Student t Errors
Atanu Biswas, Saumen Mandal (ic230): Multi-treatment Location-invariant Optimal Response-adaptive Designs for Continuous Responses
Buddhananda Banerjee (ic232): Statistical Issues with Surrogate Endpoints to Estimate the Difference of Treatment Effects
K. Muralidharan (ic233): Conditional Inferences and Large Sample Tests for Intensity Parameters in Power Law Process

11:00-13:00  Session 1-2

Rudra P Pradhan (ic143): Stock Price and Macroeconomic Indicators in India: Evidence from Causality and Cointegration Analysis
Kousik Guhathakurta, Santo Bannerjee, Basabi Bhattacharya, A. Roy Chowdhury (ic035): Stock Price Return Distribution: Non-Gaussian vs. Gaussian - An Empirical Examination
Shankar Prawesh, Martin Eling, Debasis Kundu, Luisa Tibiletti (ic025): Skew-Ellipticality in Hedge Fund Returns: Which is the Best Fit Distribution?
Ashif Tadvi, Rakesh D. Raut (ic002): A Case Study - To Prioritize the Information Management Register (IMR) Issues Using ∆RWA (Risk Weighted Assets) Approach
Dilip Roy, Goutam Mitra, Soma Panja (ic023): Closeness between Heuristic and Optimum Selections of Portfolio: An Empirical Analysis

11:00-13:00  Session 1-3

Vijay Chandru, Nimisha Gupta, Ramesh Hariharan, Anand Janakiraman, R. Prabhakar, Vamsi Veeramachaneni (ic218): Decision Analytics: The Challenge of Leveraging the Transduction of Processes
Rupesh Khare, Gauri Gupta (ic153): Clustering of Inflammatory Skin Disease Patients Using Latent Class Analysis
V. Vijaya Saradhi, Girish K. Palshikar (ic060): Improving Maximum Margin Clustering Through Span of Support Vectors Minimization
Anand Natarajan (ic038): Probabilistic Identification of Defects in an Industrial Process Using Level Crossing Techniques
Arijit Laha (ic217): On Building Information Warehouses

14:00-16:00  Session 2-1

Siddharth S. Singh, Sharad Borle, Dipak C. Jain (ic202): A Generalized Framework for Estimating Customer Lifetime Value When Customer Lifetimes Are Not Observed
Siddharth S. Singh, P. B. Seetharaman, Dipak C. Jain (ic216): A Segmentation Approach Using Customer Lifetime Value: Insights for Customer Relationship Management
Cullen Habel, Larry Lockshin (ic214): Double Jeopardy Diagnostics: A Diagnostic Tool for Market Dynamics
Alka Varma Citrin, Matthew Semadeni (ic212): Compelling Signals: Competitive Positioning Responses to Service Mark Filings
Pradip Sadarangani, Sridhar Parthasarathy (ic175): Using LISREL for Structural Equation Sub-Models

14:00-16:00  Session 2-2

B.K. Tripathy, V.M. Patro (ic128): Covering Based Rough Set Approach to Uncertainty Management in Databases
Nisseem S. Nabar, K. Rajgopal (ic129): Real Time Spike Detection from Micro Electrode Array Recordings Using Wavelet Denoising and Thresholding
Anjali Mohapatra, P.M. Mishra, S. Padhy (ic152): Motif Finding Using DNA Data Compression
Swapnali Pote, L.G. Mallik (ic191): An Approach of Summarization of Hindi Text by Extraction
Shefalika Ghosh Samaddar (ic015): Formal Modeling of Digital Rights Management for Sustainable Development of e-Commerce

14:00-16:00  Session 2-3

Gyanendra Singh, Tamal Krishna Kuila (ic146): Recovery Rate Modeling for Consumer Loan Portfolio
Satavisha Mukherjee, Sourabh Datta (ic207): The Proactive Pricing Model Using Forecasted Price Escalation Function
Jyoti Ramakrishnan, Ramasubramanian Sundararajan, Pameet Singh (ic201): Behavioural Segmentation of Credit Card Customers
Santhanakrishnan R, Sivakumar R, Harish Akella, Bimal Horo (ic099): Precision Targeting Models for Improving ROI of Direct Marketing Interventions
Sunit Pahwa, Prasanna Janardhanam, Rajan Manickavasagam (ic070): Customer Purchase Behaviour Prediction Approach for Managing the Customer Favourites List on a Grocery E-Commerce Portal
V. Ramachandran (ic249): Product Inventory Management at BPCL & Effective and Efficient Distribution of Products to Demand Centers

16:30-18:00  Session 3-1

Rajkumari Soni (ic148): Indian Mutual Funds Performance: 1999-2008
Astha Agarwalla, Amir Bashir Bazaz, Vinod Ahuja (ic098): Household Meat Demand in India: A Systems Approach Using Micro Level Data
Priyanka Singh, Brajesh Kumar (ic237): The Lead-Lag Relationship between Nifty Spot and Nifty Futures: An Intraday Analysis
Ramnik Arora, Utkarsh Upadhyay (ic119): Can ETF Arbitrage Be Extended to Sector Trading?

16:30-18:00  Session 3-2

Niharika Gaan (ic177): Development of Emotional Labour Scale in Indian Context
Anil Kumar (ic062): Women in Small Businesses: A Study of Entrepreneurial Issues
Amitabh Deo Kodwani, Manisha K. (ic226): Employees Perception of the Factors Influencing Training Effectiveness
Anurag Pant, Sanjay Mishra (ic225): One Shoe Doesn’t Fit All: An Investigation into the Processes that Lead to Success in Different Types of Entrepreneurs

16:30-18:00  Session 3-3

M. J. Xavier, Anil Srinivasan, Arun Thamizhvanan (ic048): Use of Analytics in Indian Enterprises: A Survey
A. Hayter (ic227): Using Data to Make Good Management Decisions
Abhinandan Jain, Uma V (ic205): Enhancing Business Decisions through Data Analytics and Use of GIS: A Business Application
R. Dholakia, Amir B. Bazaz, Prasoon Agrawal, Astha Govil (ic234): Trends in Technical Progress in India, 1968-2003

18:30-20:00  Poster Sessions

18:30-20:00  P-I

Gaurav Agrawal (ic026): Terrorist Attack & Changes in the Price of the Underlying of Indian Depositories
Ankit Goyal, Gunjan Malhotra (ic107): Co-integration of US & Indian Stock Indexes
A. Oliver Bright (ic163): A Common Financial Performance Appraisal Model for Evaluating District Central Cooperative Banks
Vishal Dahiya (ic050): Analysis of Rendering Techniques for the Perception of 3D Shapes
M S Prakash Ch., B. K. Tripathy (ic080): MMeR: An Algorithm for Clustering Categorical Data Using Rough Set Theory
Jyoti Verma, Sujata Verma (ic108): Role of Forecasting in Decision Making Science
Mohit Salviya, Sunil Agrawal (ic131): Bullwhip Diminution Using Control Engineering
Goyal L.M., Mamta Mittal, Kaushal V.P. Singh, Johari Rahul (ic238): Automatic Detection of Clusters
Patita Paban Pradhan (ic168): Revenue Management

18:30-20:00  P-II

Shyamal Tanna, Sanjay Shah (ic032): Data Analysis Using SAS in Retail Sector
Bikramjit Rishi (ic042): Segmenting the Apparel Consumers in the Organized Retail Market
V R Uma (ic135): The Impact of Psychographics on the Footwear Purchase of Youth: Implications for the Manufacturers to Reposition Their Products
Anita Sukhwal, Hamendra Kumar Dangi (ic141): Factor Analytical Approach for Site Selection of Retail Outlet - A Case Study
Fareed F Khoja, Surbhi Kangad (ic183): A Statistical Analysis for Understanding Mobile Phone Usage Pattern among College-Goers in the District of Kachchh, Gujarat
Amol G, Aswin T, Basant P, Deepak S, Harish D (ic190): Exploring the Factors Affecting the Migration from Traditional Banking Channels to Alternate Banking Channels (Internet Banking, ATM)
Pratap Sikdar (ic082): Weather Business in India – Potential & Challenges
Mandeep Dhillon (ic095): Understanding of Happiness among Indian Youth: A Qualitative Approach

18:30-20:00  P-III

Keerthi Kumar, M. Pratima (ic086): Analytical Approach for Credit Assessment of Microfinance Borrowers
Sorabh Sarupria (ic174): Data Mining & Business Intelligence in Healthcare
Chiranjibi Dipti Ranjan Panda (ic186): Business Intelligence in Customer Relationship Management, A Synergy for the Retail Banking Industry
Chetna Gupta, Abhishek Ranjan (ic193): ‘Competitive Intelligence’ in Pricing Analytics
Sagar J Kadam, Biren Pandya (ic220): Retail Analytics and ‘Lifestyle Needs’ Segmentations
E. Nanda Kishore (ic016): Revenue/Profit Management in Power Stations by Merit Order Operation
Anindo Chakraborty (ic185): How to Handle Multiple Unsystematic Shocks to a Time Series Forecasting System - An Application to Retail Sales Forecasting
U.K. Panda, GBRK Prasad, A.R. Aryasri (ic200): A Model Using Scientific Method to Cut Down Costs by Efficient Design of Supply Chain in Power Sector
Suresh Veluchamy, Andrew Cardno, Ashok K Singh (ic219): Clustering as a Business Intelligence Tool

7th June 2009

9:00-11:00  Session 5-1

Jayesh P Aagja, Toby Mammen, Amit Saraswat (ic140): Validating Service Convenience Scale and Profiling Customers - A Study in the Indian Retail Context
Saroj Datta, Shivani Anand, Sadhan K De (ic188): A Model for Classification and Prioritization of Customer Requirements in the Value Chain of Insurance Industry
Ramendra Singh, Bhavin Shah (ic244): On the Folly of Rewarding Without Measuring: A Case Study on Performance Appraisal of Sales Officers and Sales Managers in a Pharmaceutical Company
Sanjeev Tripathi, P. K. Sinha (ic242): The Format or the Store? How Buyers Make Their Choice
Sapna Solanki (ic145): Consumer Involvement for Durable and Non Durable Product: Key Indicators and Its Impact

9:00-11:00  Session 5-2

Goutam Dutta, Sankarshan Basu, Jose John (ic221): Development of Utility Function for Life Insurance Buyers in the Indian Market
Sreekumar, Ranjit Kumar Das, Rama Krishna Padhi, S.S. Mahapatra (ic081): A RIDIT Approach to Evaluate the Vendor Perception towards Bidding Process in a Vendor-Vendee Relationship
Ashif Tadvi, Rakesh D. Raut, Prashant Singh (ic013): Linear Probabilistic Approach to Fleet Size Optimization
T Manikandan, Senthil Kumaran S (ic184): Optimization of Manufacturing Lead Time in an Engine Valve Manufacturing Company Using ECRS Technique
Srinivas Prakhya, Jayaram Holla, Shrikant Kolhar (ic240): ‘Competitive Intelligence’ in Pricing Analytics

9:00-11:00  Session 5-3

Sandeep Das (ic208): Improving Predictive Power of Binary Response Model Using Multi Step Logistic Approach
Nimisha Gupta, Vamsi Veeramachaneni, O.M.V. Sucharitha, Ramesh Hariharan, V. Ravichandar, Saroj Sridhar, T. Balaji (ic115): Net Opinion in a Box
Mitul Shah, Jayalakshmi Subramanian, Suyashi Shrivastava, Kunal Krishnan (ic134): Using Investigative Analytics & Market-Mix Models for Business Rule & Strategy Formulation – A CPG Case Study
Ruhi Khanna, Atik Gupta, Devarati Majumdar, Shubhra Verma (ic064): Improve Dispatch Capacity of Central Pharmacy
Chetan Mahajan, Prakash G. Awate (ic103): Application of Neural Networks in Statistical Control Charts for Process Quality Control

11:30-13:00  Session 6-1

Seshadev Sahoo, Prabina Rajib (ic125): Measurement of Risk and IPO Underprice
Debdatta Pal (ic239): Efficiency of Microfinance Institutions in India
Gunjan M Sanjeev (ic187): Measuring Efficiency of Indian Rural Banks Using Data Envelopment Analysis
Santanu Roy (ic228): Ranking R&D Institutions: A DEA Study in the Indian Context

11:30-13:00  Session 6-2

M. K. Ghosh, Vivek S. Borkar, Govindan Rangarajan (ic204): A New Filtering Approach to Credit Risk
Balaji Raman, Vladimir Pozdnyakov (ic077): Volatility of Eurodollar Futures and Gaussian HJM Term Structure Models
A.N. Sekar Iyengar (ic053): Wavelet Based Volatility Clustering Estimation of Foreign Exchange Rates
Tomoaki Nakatani (ic066): Modelling Multivariate GARCH Models with R: The ccgarch Package

11:30-13:00  Session 6-3

Abhinanda Sarkar (ic213): Wind Energy: Models and Inference
Prakash Subramonian, Sandeep Baliga, Amarnath Subrahmanya (ic165): Field Data Analysis - A Driver for Business Intelligence and Proactive Customer Oriented Approach
Girish Keshav Palshikar (ic041): Simple Algorithms for Peak Detection in Time-Series
Rudra Sarkar (ic206): Using the Decision Tree Approach for Segmentation Analysis – An Analytical Overview
Sanjay Bhargava: Novel Business Application - Business Analytics

14:00-16:00  Session 7-1

Gouri Sankar Beriha, B. Patnaik, S.S. Mahapatra (ic106): Service Quality Evaluation on Occupational Health in Fishing Sector Using Grey Relational Analysis to Likert Scale Surveys
Binod Kumar Singh (ic007): An Empirical Study on Perception of Consumer in Insurance Sector
Hardeep Chahal (ic012): Two Component Customer Relationship Management Model for Health Care Services
Uma V. P. Shrivastava (ic091): An Analytical Study of the Effect of Advertisement on the Consumers of Middle Size Towns
Sadia Samar Ali, R. K. Bharadwaj, A. G. Jayakumari (ic222): Empirical Framework of Bayesian Approach to Purchase Incidence Model

14:00-16:00  Session 7-2

O.P. Vyas, Ranjana Vyas, Vivek Ranga, Anne Gutschmidt (ic022): Exploring Temporal Associative Classifiers for Business Analytics
Ganeshan Kannabiran, Derick Jose, Shriharsha Imrapur (ic150): Application of Analytical Process Framework for Optimization of New Product Launches in CPG & Retail Industry
Jyothi Pillai, Sunita Soni, O.P. Vyas (ic118): The Predictive Analytics Using Innovative Data Mining Approach
D Mohanty, B.K. Tripathy, J. Ojha (ic176): On Rough Approximations of Classifications, Representations of Knowledge and Multivalued Logic

14:00-16:00  Session 7-3

K. C. Mahesh, Arnab K Laha (ic078): SB-Robust Estimation of Parameters for Circular Normal Distribution
Somak Dutta, Arnab K Laha (ic247): Bayesian Analysis of Rank Data with Covariates
Ritu Gupta, Siuli Mukhopadhyay (ic055): Selecting a Stroke Risk Model Using Parallel Genetic Algorithm

16:30-17:30  Session 8-1

Anita Sarkar, Manjari Singh (ic243): Linking Psychological Empowerment to Work-Outcomes
Mandeep Dhillon (ic093): To Identify the Employability Skills for Managers through the Content Analysis of the Selected Job Advertisements
Hamendra Dangi, A.S Narag, Amit Bardhan (ic037): Performance Measurement in Relief Chain: An Indian Perspective

16:30-17:30  Session 8-2

M. Selvanayaki, Vijaya MS (ic127): Machine Learning Approach for Predicting Quality of Cotton Using Support Vector Machine
V. S. Gomase, Yash Parekh, Subin Koshy, Siddhesh Lakhan, Archana Khade (ic209): Machine Learning Techniques: Approach for Mapping of MHC Class Binding Nonamers
Rashmi Kumar Agrawal, Sanjeev Prashar (ic059): The Click Click Agreements – The Legal Perspectives

17:45-18:15 Closing Session


Business Analytics – A Time for Introspection

Siddhartha Roy
Economic Advisor – Tata Group

At the outset, let me thank the organizers of the 1st IIMA International Conference on
Advanced Data Analysis, Business Analytics and Intelligence for inviting me to deliver
the keynote address; one feels both privileged and honoured.

These are turbulent times. For someone like me, who is usually an indolent user of
established quantitative methods, the recent events provide a wake-up call. Discontinuity
in behavioural reactions when income growth, consumption expenditure growth and
corporate earnings come hurtling down makes us question the adequacy of our
methodologies in predicting the future. Nothing seems incongruous: housing, durables
and FMCG offtake collapse, yet chocolate sales merrily climb.

For nearly three decades one has been associated with the application of decision
analytics in business. Both as a practitioner and a user, one has marveled at
developments such as the surge in computing power, the progress from simple
multivariate techniques to sophisticated data mining, the increasing use of neural nets
and genetic algorithms in addressing financial, marketing and advertising response
issues, and the extensive use of simulation for scenario studies. All these are
intellectually fascinating, and this conference provides a veritable feast of such papers.

Yet more often than not one has been dismayed by the incapacitating predictive failures
at the major turning points of the economy or asset markets. Our business cycle
research and understanding of lead and lag indicators have progressed a lot – yet we
are not quite there!

We did not predict the timing when we slipped into the current meltdown; nor do we
know when we’ll manage to get out of it. Someone said that in retrospect everything is
obvious; in fact, our grandchildren will seriously question the intellectual sanity of a set of
risk management experts who could not predict the last snivel of investment bankers in
2008.

For a moment one is not suggesting that when quantitative methods succeed in
predicting the outcome it is pure serendipity; nor is one saying that our failings put us on
par with a Voodoo practitioner, or an astrologer. However, there is a need for serious
introspection. In order to maintain our credibility it is better to avoid the temptation of
competing with an aphrodisiac or snake-oil seller.

Moving ahead, one possibly has to focus a lot more on the context and the behavioural
information captured in the data. For example, within the same product group, the
consumer’s sensitivity (elasticity) to a pricing change is very different when the economy
gets into disequilibrium. How consumer and investor confidence, or the lack of it, keep
reinforcing each other in the formation of demand cycles often escapes the attention of
a researcher focused on micro issues. Then there could be asymmetry in micro
behaviour related to pricing and advertising, as well as demand ratchets. Their linkage
with the changing macro context appears to be more visible in a meltdown phase. Cars,
durables, revenue per mobile unit and even certain FMCG items seem to get affected.

The next question is how we generalize a research result; there are significant cross-
cultural and cross-country differences in behavioural response functions. A forum like this
can certainly be helpful for exchanging research results and experience. However, this
also requires a contextual understanding of the socio-economic stages of development,
cultural alignments, etc. In other words, quantitative specialists have to broaden
their thinking and welcome experts from other disciplines.

Many a time, the lack of connectivity with other disciplines becomes quite brazen. We have
excellent simulation models for minimizing enterprise value at risk, but do we really
understand how risk and greed interact with each other, possibly nonlinearly? Similarly,
there are other questions: is the past a good indicator of the future? How do we incorporate
structural discontinuity in our understanding? Retrofitting dummy variables may not be the
smartest solution.

In the physical sciences the development of a cogent theory is at a fairly advanced stage,
but knowledge about behavioural response functions is still evolving. Advanced
statistical methods and AI can possibly help in this journey, which seems to have just
begun. However, there is a need to ring-fence inductive logic and hypothesis building
from crass empiricism. In a meltdown phase, in volatile markets, the limits of our
knowledge tend to get seriously exposed.

Finally, there is a career-related question: would you rather be an adviser to a gambler
rolling a six-faced die or picking a card from a standard pack, or join a day trader
facing new outcomes every day? The meltdown has had one good effect: it has exposed the
limits of our understanding in delineating the outcomes. Maybe this softer side of
business analytics calls for greater creativity and needs some focus.


Geometric convergence of the Haar PX-DA algorithm for the Bayesian multivariate regression model with Student t errors

Vivekananda Roy, Department of Statistics, Iowa State University
James P. Hobert, Department of Statistics, University of Florida

We consider Bayesian analysis of data from multivariate linear regression models whose
errors have a distribution that is a scale mixture of normals. Such models are used to
analyze data on financial returns, which are notoriously heavy-tailed. Let π denote the
intractable posterior density that results when this regression model is combined with the
standard non-informative prior on the unknown regression coefficients and scale matrix
of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is
the sample size, d is the dimension of the response, and k is the number of covariates. We
provide a method of making exact draws from π in the special case where n = d + k, and
we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore π
when n > d + k. In particular, we show how the Haar PX-DA technology of Hobert and
Marchev (2008) can be used to improve upon Liu’s (1996) data augmentation (DA)
algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA
algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we
analyze the convergence rates of these MCMC algorithms in the important special case
where the regression errors have a Student’s t distribution. We prove that, under
conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms
converge at a geometric rate. These convergence rate results are important from a
practical standpoint because geometric ergodicity guarantees the existence of central
limit theorems which are essential for the calculation of valid asymptotic standard errors
for MCMC based estimates.

Key words and phrases: Data augmentation algorithm, Drift condition, Markov chain,
Minorization condition, Monte Carlo, Robust multivariate regression
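
To make the data augmentation idea concrete, the sketch below implements the basic DA Gibbs sampler for the univariate special case (a single response with Student t errors, under the improper prior π(β, σ²) ∝ 1/σ²). It is a minimal illustration of the scale-mixture representation the abstract builds on, not the Haar PX-DA algorithm of the paper, and all function and variable names are illustrative.

```python
import numpy as np

def da_gibbs_t_regression(y, X, nu, n_iter=5000, seed=0):
    """Basic DA Gibbs sampler for y = X @ beta + eps, eps_i ~ t_nu(0, sigma2).

    The t error is a scale mixture of normals: eps_i | w_i ~ N(0, sigma2 / w_i)
    with w_i ~ Gamma(nu/2, rate=nu/2). Prior: pi(beta, sigma2) ~ 1/sigma2."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting value
    sigma2 = float(np.var(y - X @ beta))
    draws = np.empty((n_iter, k + 1))
    for it in range(n_iter):
        # Step 1: latent weights given (beta, sigma2); numpy's gamma takes scale = 1/rate.
        r = y - X @ beta
        w = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + r**2 / sigma2))
        # Step 2: (sigma2, beta) given w, via the weighted least squares solution.
        XtW = X.T * w
        XtWX = XtW @ X
        beta_hat = np.linalg.solve(XtWX, XtW @ y)
        rss_w = float(np.sum(w * (y - X @ beta_hat) ** 2))
        sigma2 = (rss_w / 2.0) / rng.gamma((n - k) / 2.0, 1.0)   # inverse gamma draw
        beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(XtWX))
        draws[it] = np.append(beta, sigma2)
    return draws
```

Geometric ergodicity of chains like this one is precisely what licenses valid Monte Carlo standard errors for averages of the draws, which is the practical payoff the abstract emphasizes.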


Multi-treatment location-invariant optimal response-adaptive designs for continuous responses

Atanu Biswas
Applied Statistics Unit, Indian Statistical Institute,
atanu@isical.ac.in

Saumen Mandal
Department of Statistics, University of Manitoba, Canada
saumen_mandal@umanitoba.ca

Optimal response-adaptive designs for phase III clinical trials involving two or
more treatments are of growing interest. Optimal response-adaptive designs were
provided by Rosenberger et al. (2001) for binary responses and by Biswas and Mandal
(2004) [BM] for continuous responses. Zhang and Rosenberger (2006) [ZR] provided
another design for normal responses. Biswas, Bhattacharya and Zhang (2007) [BBZ]
pointed out some serious drawbacks of the ZR design. Moreover, all the earlier works
of BM, ZR and BBZ suffer seriously if there is any common shift in the location of the
observed responses. The present paper provides a location-invariant design for that
purpose and then extends the approach to more than two treatments. The proposed
methods are illustrated using some real data sets.

Key words: Constraints, Ethical allocation, Minimization, Truncated normal distribution, Two parameter exponential family.
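
A toy simulation in the spirit of the two-treatment continuous-response designs cited above may help fix ideas. The allocation rule below, Φ((μ̂_B − μ̂_A)/T) with a user-chosen scaling constant T and smaller responses taken as better, is an illustrative assumption rather than the paper's design; because it depends only on the difference of sample means, a common shift in location leaves the allocation unchanged, which is the invariance property at issue.

```python
import numpy as np
from scipy.stats import norm

def adaptive_trial(mu_a, mu_b, sigma, n_patients=200, T=1.0, seed=1):
    """Toy two-treatment response-adaptive trial (smaller response = better).

    Patient i+1 goes to treatment A with probability
    Phi((mean_B_hat - mean_A_hat) / T), which depends only on the difference
    of sample means and is therefore unchanged by a common location shift."""
    rng = np.random.default_rng(seed)
    resp = {"A": [], "B": []}
    for i in range(n_patients):
        if len(resp["A"]) < 2 or len(resp["B"]) < 2:
            arm = "A" if i % 2 == 0 else "B"     # burn-in: alternate arms
        else:
            diff = np.mean(resp["B"]) - np.mean(resp["A"])
            p_a = norm.cdf(diff / T)             # favour the better arm
            arm = "A" if rng.random() < p_a else "B"
        mu = mu_a if arm == "A" else mu_b
        resp[arm].append(rng.normal(mu, sigma))
    return len(resp["A"]) / n_patients

# With mu_a < mu_b (A better), the proportion allocated to A exceeds 1/2,
# and adding the same constant to both means does not change the behaviour.
print(adaptive_trial(0.0, 1.0, 2.0))
print(adaptive_trial(10.0, 11.0, 2.0))   # common shift: same allocation
```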


Statistical Issues with Surrogate Endpoints to Estimate the Difference of Treatment Effects

Buddhananda Banerjee
Indian Statistical Institute, Kolkata

buddhananda_r@isical.ac.in

A surrogate endpoint is defined as a measure or indicator of a biological process that is
obtained sooner, and at less cost, than a true endpoint of health outcome, and that is used
to draw conclusions about the effect of an intervention on the true endpoint. Instead of
assuming the well-known Prentice criterion (1989), we introduce here a new assumption for
binary as well as continuous endpoints: only the true endpoint absorbs the entire
information, so that, given the true endpoint, the surrogate is independent of (or less
influenced by) the treatment. We establish Principle 1 of Begg and Leung (2000) for the
two-treatment binary response problem. We study the nature of the deviation when
surrogate endpoints, along with a few true endpoints, are used to estimate the difference of
success probabilities between two treatments, and we address the problem that
estimation through the surrogate endpoint is not consistent but rather underestimates the
difference. Surrogate endpoints having very low "concordance" probability with the true
endpoint are also addressed. For continuous endpoints we assume that the conditional
expectation of the surrogate endpoint given the true one equals the given value of the true
endpoint, and we suggest an optimal use of both kinds of data to minimize the standard
error of estimation.

Key words: surrogate endpoint, true endpoint, Prentice criterion, concordance probability


Conditional Inferences and Large Sample Tests for Intensity Parameters in Power Law Process

K. Muralidharan
Department of Statistics,
The M. S. University of Baroda

lmv_murali@yahoo.com

The power law process (PLP), or Weibull process, is the simplest point process model applied
to repairable systems and reliability growth situations. A repairable system, sometimes
called a maintained system, is usually characterized by its intensity function λ(x),
generally a time-dependent function. Therefore, a test of H0: λ(x) = λ0, a constant intensity,
against an increasing or decreasing intensity is very important for assessing the presence of
trend in the process. The test for trend is also essential for a maintained system working
under different environmental conditions, as the repair policy is often decided on
the basis of the type of trend present in the model. We investigate some conditional
inferences for constructing test statistics for testing trend and study their practical
importance from the repair policy point of view. Some numerical computations and
examples are also presented.

Keywords: Power law process, Reliability growth, Repairable systems, Repair policy
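
For orientation, the PLP has intensity λ(t) = (β/θ)(t/θ)^(β−1), and the standard unconditional analysis of a failure-truncated record is straightforward to code. The sketch below gives the usual maximum likelihood estimates and the classical chi-square test of H0: β = 1 (no trend); it is background for, not a reproduction of, the conditional procedures the paper investigates.

```python
import numpy as np
from scipy.stats import chi2

def plp_fit_and_trend_test(times):
    """MLE and trend test for a failure-truncated power law process.

    Intensity: lambda(t) = (beta/theta) * (t/theta)**(beta - 1).
    H0: beta = 1 (homogeneous Poisson process, i.e. no trend)."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    s = np.sum(np.log(t[-1] / t[:-1]))        # sum of ln(t_n / t_i), i < n
    beta_hat = n / s
    theta_hat = t[-1] / n ** (1.0 / beta_hat)
    stat = 2.0 * s                             # ~ chi2 with 2(n-1) df under H0
    df = 2 * (n - 1)
    # A small statistic suggests beta > 1 (deterioration, failures clustering
    # late); a large one suggests beta < 1 (reliability improvement).
    p_two_sided = 2.0 * min(chi2.cdf(stat, df), chi2.sf(stat, df))
    return beta_hat, theta_hat, stat, p_two_sided

# Hypothetical failure times showing a slowing failure pattern:
print(plp_fit_and_trend_test([105, 196, 268, 319, 361, 389, 410, 424, 433, 439]))
```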


Stock Price and Macroeconomic Indicators in India: Evidence from Causality and Cointegration Analysis
Rudra P. Pradhan
Vinod Gupta School of Management
Indian Institute of Technology Kharagpur
rudrap@vgsom.iitkgp.ernet.in

The behaviour of stock prices has been a recurrent topic in finance research. Stock
prices are time varying and depend upon past information, market news and
various macroeconomic factors. This paper examines the impact of macroeconomic
factors on stock prices, using the Bombay Stock Exchange as a case study.
Cointegration analysis and a vector error correction model (VECM) are used to
ascertain both short-run and long-run relationships. Monthly data over the period
1994-2005, falling within the globalization era that began in the 1990s, are used for
the empirical investigation. The findings reveal that the stock price and the
macroeconomic variables (the index of industrial production, money supply, inflation
and the exchange rate) are integrated of order one and that a long-run equilibrium
relationship exists between them. The VECM confirms the possibility of both short-run
and long-run dynamics between the stock price and the macroeconomic variables. The
policy implication of this study is that macroeconomic variables can serve as policy
variables for forecasting stock prices in the economy.
Keywords: Stock price, macroeconomic variables, VECM
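
A hedged sketch of this style of analysis, using the statsmodels package on synthetic stand-in series (the paper's BSE data are not reproduced here, and all variable names are illustrative):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Synthetic stand-in for monthly series (a stock price index and one
# macroeconomic indicator) sharing a single common stochastic trend.
rng = np.random.default_rng(42)
n = 144                                     # e.g. 12 years of monthly data
trend = np.cumsum(rng.normal(size=n))       # common I(1) component
stock = 1.0 * trend + rng.normal(scale=0.5, size=n)
macro = 0.8 * trend + rng.normal(scale=0.5, size=n)
data = np.column_stack([stock, macro])

# Johansen cointegration test: trace statistics vs. 90/95/99% critical values.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("critical values:\n", jres.cvt)

# VECM with one cointegrating relation: alpha captures short-run adjustment,
# beta the long-run equilibrium relation.
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("loading matrix alpha:\n", vecm_res.alpha)
print("cointegrating vector beta:\n", vecm_res.beta)
```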


Stock Price Return Distribution: Non-Gaussian vs. Gaussian – An Empirical Examination

Kousik Guhathakurta1, Santo Bannerjee2, Basabi Bhattacharya3 and A. Roy Chowdhury4
1,2 Army Institute of Management, Kolkata; 3 Department of Economics, Jadavpur University; 4 High Energy Physics Division, Department of Physics, Jadavpur University
kousikg@gmail.com; santoban@gmail.com; basabi54@gmail.com; arc.roy@gmail.com

It has long been argued that the distributions of empirical returns do not follow
the lognormal distribution upon which many celebrated results of finance are based,
including the Black-Scholes option pricing model. There have been many alternative
approaches, but to our knowledge none results in manageable closed form
solutions, which is a useful feature of the Black-Scholes approach. However,
Borland (2002) succeeded in obtaining closed form solutions for European options.
That approach is based on a new class of stochastic processes, recently developed
within the very active field of Tsallis non-extensive thermostatistics, which allow for
statistical feedback as a model of the underlying stock returns. Motivated by this, we
simulate two distinct time series based on initial data from NIFTY daily close values.
One is based on the classical Gaussian model, where the stock price follows geometric
Brownian motion. The other is based on the non-Gaussian model built on the Tsallis
distribution, as proposed by Borland. Using techniques of non-linear dynamics, we
examine the underlying dynamic characteristics of both simulated time series and
compare them with the characteristics of the actual data. Our findings give a definite
edge to the non-Gaussian model over the Gaussian one.

Keywords: Stock Price Movement, Brownian Motion, Tsallis Distribution, Non Linear
Analysis
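
The Gaussian leg of such a comparison is easy to sketch: estimate the mean and volatility of log returns from the initial data and simulate a geometric-Brownian-motion-style path. The snippet below is illustrative only, with a placeholder price history; the Tsallis-based simulation, which requires q-Gaussian sampling, is deliberately omitted.

```python
import numpy as np

def simulate_gbm_from_history(prices, n_steps, seed=7):
    """Simulate the Gaussian benchmark: a GBM-style price path whose per-step
    log-return mean and volatility are estimated from an observed history
    (e.g. NIFTY daily closes). The non-Gaussian (Tsallis) counterpart would
    replace the normal draws with q-Gaussian draws."""
    rng = np.random.default_rng(seed)
    prices = np.asarray(prices, dtype=float)
    log_ret = np.diff(np.log(prices))
    mu, sigma = log_ret.mean(), log_ret.std(ddof=1)   # log-return mean / vol
    steps = mu + sigma * rng.standard_normal(n_steps)
    return prices[-1] * np.exp(np.cumsum(steps))

# Hypothetical usage with a synthetic stand-in price history:
history = 100 * np.exp(np.cumsum(np.random.default_rng(0).normal(0, 0.01, 500)))
simulated = simulate_gbm_from_history(history, n_steps=250)
```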


Skew-Ellipticality in Hedge Fund Returns: Which is the Best Fit Distribution?

Martin Eling1, Debasis Kundu2, Luisa Tibiletti3, Shankar Prawesh4
1 University of St. Gallen, Switzerland: martin.eling@unisg.ch
2 Indian Institute of Technology, Kanpur: kundu@iitk.ac.in
3 University of Torino, Italy: luisa.tibiletti@unito.it
4 Indian Institute of Technology, Kanpur: sprawesh@iitk.ac.in

To study the nature of financial products it is necessary to model empirical return
data with a proper statistical distribution. In view of deviations from the normal
distribution and the heavy tails of empirical densities, studying the statistical distributions
of financial returns has become imperative. Due to the use of options and leverage,
hedge funds are especially prone to non-normality. The aim of the present paper is to
model heavy-tailed hedge fund returns with skew-elliptical distributions. Specifically,
we focus on the skew-normal, skew-t and skew-logistic distributions.

Keywords: Heavy tail distribution, Skew-Elliptical distribution, Goodness of fit.
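
As a rough illustration of this kind of distribution fitting, the sketch below fits candidate densities by maximum likelihood and compares them via AIC. scipy provides the normal, Student t and skew-normal; the skew-t and skew-logistic fits reported in the paper would require additional packages, and the data below are synthetic.

```python
import numpy as np
from scipy import stats

def compare_fits(returns):
    """Fit candidate distributions to return data by maximum likelihood and
    compare them via AIC (lower is better)."""
    returns = np.asarray(returns, dtype=float)
    results = {}
    for name, dist in [("normal", stats.norm),
                       ("student_t", stats.t),
                       ("skew_normal", stats.skewnorm)]:
        params = dist.fit(returns)
        loglik = np.sum(dist.logpdf(returns, *params))
        results[name] = 2 * len(params) - 2 * loglik   # AIC
    return results

# Hypothetical usage on synthetic, left-skewed "returns":
rng = np.random.default_rng(3)
fake_returns = -stats.skewnorm.rvs(4, size=1000, random_state=rng) * 0.02
print(compare_fits(fake_returns))
```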


A Case Study - To Prioritize the Information Management Register (IMR) Issues Using ∆RWA (Risk Weighted Assets) Approach

Ashif J. Tadvi1 and Rakesh D. Raut2
NITIE, Mumbai
1 tadvi.ashif@gmail.com, 2 rakeshraut09@gmail.com

The Strategic Information Management (SIM) department processes and stores financial
and non-financial data for regulatory and business performance reporting, and provides
extracted, crucial information to top management for strategic decisions related to the
wholesale banking business. The case is mainly based on the Credit Data Warehouse,
from which extracted data are used by the Basel Capital & Reporting Solution (BCRS)
to calculate Risk Weighted Assets (RWA). To assure data quality in the data warehouse,
a BCRS metric containing 94 attributes is used, based on three principles: accuracy,
completeness and appropriateness. To comply with Basel II AIRB (advanced internal
ratings-based approach) standards for the capital adequacy ratio, a bank has to set aside
a certain percentage of its Risk Weighted Assets (RWA) as capital, so the RWA needs to
be calculated as accurately as possible. However, due to data quality issues in the
warehouse, which are logged in the Information Management Register (IMR), the BCRS
metric relies on assumptions when the RWA Calculation Engine computes RWA, and the
RWA is therefore not calculated accurately. The IMR has more than 190 issues, and
dealing with each and every issue simultaneously is almost impossible for the following
reasons: low available resources, the large number (greater than 190) of issues, cost
constraints and time constraints.

Key words: Strategic Information Management; Information Management Register; Basel Capital & Reporting Solution (BCRS); Basel II AIRB
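
The prioritization itself can be sketched as a simple Pareto cut over estimated ∆RWA impacts. All issue identifiers and figures below are hypothetical; the point is only the ranking-and-coverage logic.

```python
# Minimal sketch: rank logged IMR issues by estimated delta-RWA impact and
# keep the smallest set covering a target share (e.g. 80%) of total impact.

def prioritize_issues(issues, coverage=0.80):
    """issues: list of (issue_id, estimated_delta_rwa) pairs; returns the
    highest-impact issues covering `coverage` of the total delta-RWA."""
    ranked = sorted(issues, key=lambda kv: kv[1], reverse=True)
    total = sum(impact for _, impact in ranked)
    chosen, running = [], 0.0
    for issue_id, impact in ranked:
        chosen.append(issue_id)
        running += impact
        if running / total >= coverage:
            break
    return chosen

# Hypothetical register entries (id, estimated delta-RWA impact):
imr = [("IMR-001", 120.0), ("IMR-002", 15.0), ("IMR-003", 340.0),
       ("IMR-004", 60.0), ("IMR-005", 5.0)]
print(prioritize_issues(imr))   # -> ['IMR-003', 'IMR-001'] covers 80%+
```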


Closeness between Heuristic and Optimum Selections of Portfolio: An Empirical Analysis

Dilip Roy1, Goutam Mitra2 and Soma Panja3
1 Centre for Management Studies, University of Burdwan, West Bengal: dr.diliproy@gmail.com
2 Department of Business Administration, University of Burdwan, West Bengal: goutamnbp2160@gmail.com
3 Management Institute of Durgapur, Durgapur, West Bengal: soma_panja1980@rediffmail.com

Selection of the optimum portfolio is a difficult task for investors, as the choice of optimum
weights is hard. In this paper, we select heuristic portfolios based on the investors'
propensity to take risk. For this purpose, two extreme situations have been chosen:
risk-taking and risk-averse investors. To construct the heuristic portfolios, we calculate
portfolio weights heuristically and examine whether any closeness exists between the
optimum portfolio constructed on the basis of the traditional method and the portfolio
constructed on the basis of the heuristic method. For demonstration purposes, we take
Nifty data for 2006 and 2007 and select a portfolio of 10 securities. After detailed
analysis, we find that closeness exists between the optimum portfolio selected
traditionally and the portfolio selected heuristically.

Key words: portfolio return, portfolio risk, optimum portfolio, heuristic portfolio, City Block Distance and Euclidean Distance.
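
The closeness measures named in the keywords are straightforward to compute; the sketch below uses hypothetical weight vectors for a 10-security portfolio.

```python
import numpy as np

def closeness(w_opt, w_heur):
    """City block (L1) and Euclidean (L2) distances between an optimum weight
    vector and a heuristic one; smaller values indicate closer selections."""
    w_opt, w_heur = np.asarray(w_opt), np.asarray(w_heur)
    return {"city_block": float(np.abs(w_opt - w_heur).sum()),
            "euclidean": float(np.linalg.norm(w_opt - w_heur))}

# Hypothetical 10-security example: optimum weights vs. a heuristic
# allocation tilted by the investor's propensity to take risk.
w_optimum = np.array([.18, .14, .12, .11, .10, .09, .08, .07, .06, .05])
w_heuristic = np.array([.20, .15, .10, .10, .10, .10, .08, .07, .05, .05])
print(closeness(w_optimum, w_heuristic))
```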


Decision Analytics: The Challenge of Leveraging the Transduction of Processes

Vijay Chandru1, Nimisha Gupta, Ramesh Hariharan, Anand Janakiraman, R. Prabhakar, and Vamsi Veeramachaneni
Strand Analytics, Bengaluru, India
1 Hon. Professor, National Institute for Advanced Studies, Bengaluru: chandru@strandls.com

The time has come for India to leverage information technology to accelerate its internal
development on several fronts. One important aspect of improved efficiency in the
economy, and one within immediate grasp for rapid implementation, is the move towards
empirically based decision support by leveraging the databases that are emerging from
the digitization or “transduction” of complex processes in the economy. The decision
sciences have given us a plethora of modeling paradigms. However, advanced training
is required for skilled use of these methodologies, and the scale at which analytics needs
to be effectively applied leaves us with a massive challenge.

We need a semi-automated and high content software platform that assembles and
homogenizes data pulled from huge repositories of raw, partial and fragmented data,
aids the “semi-skilled” user in performing deep analysis with ease in interaction and
helps him/her discover preliminary hypotheses that can be handed off to the specialists
for deeper modeling and decision support. There is an insufficient pool of trained
analysts to cope with the scale and complexity of data spewing out. It is this need that
has been called “De-Skilled Decision Analytics” and for which a solution is described in
this paper along with a number of case studies.

Keywords: Decision support, De-skilling, Software platform, Training


Clustering of Inflammatory Skin Disease Patients Using Latent Class Analysis

Rupesh K Khare1 and Gauri Gupta2
1 Hewitt Associates, Gurgaon, India: rupesh_khare@hotmail.com
2 MICA, Ahmedabad, India: gauri8@mica.ac.in

This paper highlights the concept, advantages and application of Latent Class Analysis
(LCA). The first section presents a preview of LCA, a clustering technique, to underline
the method’s suitability for various research and analytics work. The second section
highlights the relevance of LCA in light of the limitations encountered by other frequently
used clustering techniques such as K Means and hierarchical clustering. Subsequently,
the third section underscores the application of LCA by presenting a real life project
executed by the authors while they were working with marketRx, a consulting company
in pharmaceutical analytics.

Key Words: Clustering, Latent Class Analysis, Latent Gold, Consumer Behavior and
Attitudes
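
For readers unfamiliar with the machinery, the core of LCA with binary indicators is a small EM algorithm under conditional independence. The sketch below is a generic illustration (the authors used Latent Gold, not this code), with synthetic data standing in for patient symptom profiles.

```python
import numpy as np

def lca_em(X, n_classes, n_iter=200, seed=0):
    """EM for a latent class model with binary indicators.

    X: (n_subjects, n_items) 0/1 matrix (e.g. symptom present/absent).
    Returns class probabilities pi, item probabilities p[k, j] =
    P(item j = 1 | class k), and posterior class memberships."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    p = rng.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(n_iter):
        # E-step: log P(x_i | class k) under conditional independence.
        logp = (X @ np.log(p).T) + ((1 - X) @ np.log(1 - p).T) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)         # numerical stability
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class sizes and item probabilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        p = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, p, resp

# Hypothetical usage: 300 "patients", 6 binary symptoms, two latent classes.
rng = np.random.default_rng(1)
true_class = rng.integers(0, 2, 300)[:, None]
X = (rng.random((300, 6)) < np.where(true_class == 1, 0.8, 0.2)).astype(float)
print(lca_em(X, n_classes=2)[0])   # estimated class proportions
```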


Improving Maximum Margin Clustering Through Span of Support Vectors Minimization

V. Vijaya Saradhi, Girish K. Palshikar


TRDDC, Tata Consultancy Services, Pune, Maharashtra
E-mail: v.saradhi@tcs.com, gk.palshikar@tcs.com

Maximum margin based clustering has been shown to be a promising method. The central
idea in the maximum margin clustering (MMC) method is to assign labels (belonging to
the set {-1, +1}) to all the N data points such that the resulting label assignment has
maximum margin. This integer programming problem is cast as a convex semidefinite
programming (SDP) formulation by introducing a few relaxations (Linli, X., James, N.,
Bryce, L., and Dale, S., 2005). Experiments show the superiority of MMC over the spectral
kernel clustering method and other clustering methods.

In the present work, we aim to improve the MMC formulation further. Our idea is to
assign labels to all the N data points such that the margin is maximized and,
simultaneously, the generalization error bound of the support vector machine (SVM),
given in terms of the span of the support vectors, is minimized. Minimizing the span of
support vectors is formulated as an SDP and combined with the original MMC
formulation, which aims at maximizing the margin. The resulting formulation is shown to
perform better than the original MMC on UCI data sets.

Key words: Kernel methods, maximum margin, span of support vectors, clustering,
unsupervised learning
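
For intuition, a common way to approximate MMC without solving an SDP is the simple alternating heuristic below: seed labels with k-means, then repeatedly train a linear SVM and relabel points by the sign of the decision function. This is explicitly not the authors' SDP formulation and contains no span-of-support-vectors term; it only illustrates the label-assignment objective.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def iterative_max_margin_clustering(X, n_iter=20, C=1.0, seed=0):
    """Alternating heuristic approximating maximum margin clustering:
    initialize labels with k-means, then alternate between training a
    linear SVM and relabelling by the sign of its decision values."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    y = np.where(labels == 1, 1, -1)
    for _ in range(n_iter):
        svm = SVC(kernel="linear", C=C).fit(X, y)
        y_new = np.where(svm.decision_function(X) >= 0, 1, -1)
        if len(np.unique(y_new)) < 2 or np.array_equal(y_new, y):
            break                      # degenerate or converged labelling
        y = y_new
    return y

# Hypothetical usage on two well-separated Gaussian blobs:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
print(iterative_max_margin_clustering(X)[:5])
```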


Probabilistic Identification of Defects in an Industrial Process using Level Crossing Techniques

Anand Natarajan
Caterpillar India Pvt. Ltd.,
Engineering Design Center
Chennai, India

natarajan_anand@cat.com

The focus of this paper is to delineate the importance of minimizing variations in process
rate as opposed to attempting to minimize process variation alone. Normal distributions
are used extensively in industrial settings to derive the probability of defects occurring in
processes, using sampled data and applying the central limit theorem of statistics. In this
paper, a different approach is taken, whereby defects are described by the number of
up-crossings of a prescribed level, set as the specification limit for the output of a
process. The number of level crossings is modeled as a Poisson process, over a
constant or time varying barrier and an exceedance probability is computed. Modeling
defect occurrences using a level crossing approach is shown to be inclusive of
deterministic events and tracking of time dependent factors that impact the processes.
The paper expands on the principle of level crossings to emphasize that the
achievement of 6-Sigma quality levels should be focused on minimizing the variation of
the process rate and not the process by itself, as done conventionally. An algorithm
based on linear algebra to connect the process rate with the process is developed to
enable direct integration into minimization procedures, which provides optimal statistical
process control.
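
A minimal numerical sketch of the idea, with illustrative data and limits, is:

    # Count up-crossings of a specification limit in a sampled process, treat
    # the count as Poisson, and compute an exceedance probability over a horizon.
    import numpy as np

    rng = np.random.default_rng(2)
    x = 10 + np.cumsum(rng.normal(0, 0.2, 10_000)) * 0.01 + rng.normal(0, 0.5, 10_000)
    limit = 11.0                                       # upper specification limit

    up = np.sum((x[:-1] < limit) & (x[1:] >= limit))   # up-crossing count
    rate = up / len(x)                                 # crossings per sample
    T = 1_000                                          # horizon in samples
    p_defect = 1 - np.exp(-rate * T)   # P(at least one exceedance), Poisson model
    print(f"up-crossings: {up}, rate: {rate:.4f}, P(defect in horizon): {p_defect:.3f}")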

Key Words: Level crossings, Poisson processes, mean crossing rate, 6-Sigma,
probability of exceedance

30
ICADABAI 2009 – Abstracts

On Building Information Warehouses

Arijit Laha
Center for Knowledge-driven Information Systems
Software Engineering and Technology Labs
Infosys Technologies Ltd.
Hyderabad

Arijit_laha@infosys.com

One of the most important goals of information management (IM) is supporting knowledge
workers in performing their work. In this paper we examine issues of relevance, linkage
and provenance of information, as accessed and used by knowledge workers. These issues
are usually not addressed adequately in most IT-based solutions for IM. Here we propose a
non-conventional approach to building information systems that support knowledge workers
and address these issues. The approach leads to the ideas of building Information
Warehouses (IW) and Knowledge work Support Systems (KwSS). Such systems can open up the
potential for building innovative applications of significant impact, including those
capable of helping organizations implement processes for double-loop learning.

Keywords: information system, knowledge management, relevance, linkage, provenance, knowledge work support systems

31
ICADABAI 2009 – Abstracts

A Generalized Framework for Estimating Customer Lifetime Value When Customer Lifetimes Are Not Observed

Siddharth S. Singh1, Sharad Borle2, and Dipak C. Jain3


1, 2 Jesse H. Jones Graduate School of Management, Rice University, Texas
3 J. L. Kellogg School of Management, Northwestern University, Illinois

E-mail: sssingh@rice.edu, sborle@rice.edu, d-jain@kellogg.northwestern.edu

Measuring customer lifetime value (CLV) in contexts where customer defections are not
observed, i.e., noncontractual contexts, has been very challenging for firms. This paper
proposes a flexible Markov Chain Monte Carlo (MCMC) based data augmentation
framework for forecasting lifetimes and estimating CLV in such contexts. The framework
can be used to estimate many different types of CLV models, both existing and new.

Models proposed so far for estimating CLV in noncontractual contexts have built-in
stringent assumptions with respect to the underlying customer lifetime and purchase
behavior. For example, two existing state-of-the-art models for lifetime value estimation
in a noncontractual context are the Pareto/NBD and the BG/NBD models. Both of these
models are based on fixed underlying assumptions about drivers of CLV that cannot be
changed even in situations where the firm believes that these assumptions are violated.
The proposed simulation framework—not being a model but an estimation framework—
allows the user to use any of the commonly available statistical distributions for the
drivers of CLV, and thus the multitude of models that can be estimated using the
proposed framework (the Pareto/NBD and the BG/NBD models included) is limited only
by the availability of statistical distributions. In addition, the proposed framework allows
users to incorporate covariates and correlations across all the drivers of CLV in
estimating lifetime values of customers.
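
A much-simplified sketch of the simulation spirit of the framework, with distribution choices that are purely illustrative (the paper embeds such draws in an MCMC data augmentation scheme), is:

    # Draw unobserved lifetimes and purchase behavior from user-chosen
    # distributions and average discounted spending to estimate mean CLV.
    import numpy as np

    rng = np.random.default_rng(3)
    n_draws, d = 10_000, 0.10          # simulated customers, annual discount rate

    lifetimes = rng.exponential(scale=3.0, size=n_draws)       # years with firm
    freq = rng.gamma(shape=2.0, scale=2.0, size=n_draws)       # purchases / year
    spend = rng.lognormal(mean=3.0, sigma=0.5, size=n_draws)   # value / purchase

    r = np.log(1 + d)                  # continuous-time discount rate
    annual = freq * spend
    clv = annual * (1 - np.exp(-r * lifetimes)) / r
    print("estimated mean CLV:", round(float(clv.mean()), 2))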

Key Words: Customer Lifetime Value; Forecasting; Simulation; Data Augmentation; MCMC.

32
ICADABAI 2009 – Abstracts

A Segmentation Approach Using Customer Lifetime Value: Insights for Customer Relationship Management

Siddharth S. Singh1, P. B. Seetharaman2, Dipak C. Jain3

1, 2 Jesse H. Jones Graduate School of Management, Rice University, Texas
3 J. L. Kellogg School of Management, Northwestern University, Illinois

E-mail: sssingh@rice.edu, seethu@rice.edu, d-jain@kellogg.northwestern.edu

A valuable metric used in Customer Relationship Management (CRM) is Customer Lifetime
Value (CLV). We propose a latent class methodology to recover CLV segments in a unique
contractual context where customer lifetimes are observed. To our knowledge, this is the
first paper that uses the popular latent class segmentation methodology to segment
customers based on their lifetime value by jointly considering the key drivers of CLV.
Using customer-level data from a membership-based direct marketing company, we estimate a
statistical model of three simultaneous behavioral drivers of CLV, i.e., (1) customer
lifetime, (2) customer inter-purchase time, and (3) dollar spending, while allowing the
model parameters to be heterogeneous across customers along observed and unobserved
dimensions. The estimated segment-specific model parameters are used to obtain
segment-specific CLV estimates for the firm. We uncover three CLV segments. We find that
longer (shorter) lifetime customers have lower (higher) CLV, which is contrary to popular
wisdom regarding contractual contexts. Further, longer (shorter) lifetime customers also
have longer (shorter) inter-purchase times with the company. The average dollar spending
per purchase is a non-monotonic function of customer lifetimes. Finally, we compare our
results to those obtained from another segmentation method used in the extant literature.
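
For readers unfamiliar with the metric, one common way to write CLV in terms of the three behavioral drivers, offered here as an illustration rather than as the paper's exact specification, is

    CLV_i = \sum_{k=1}^{N_i} \frac{E[S_{ik}]}{(1+d)^{t_{ik}}},

where t_{ik}, the time of customer i's k-th purchase, accumulates the inter-purchase times; N_i is the number of purchases completed within the observed lifetime L_i; S_{ik} is the dollar spending on that purchase; and d is the discount rate. Segment-specific parameter estimates for the three drivers then yield segment-specific CLV estimates.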

Key-Words: Customer Lifetime Value (CLV), Customer Relationship Management (CRM), Latent Class Segmentation.

33
ICADABAI 2009 – Abstracts

Double Jeopardy Diagnostics: A Tool to Understand Market Dynamics

Cullen Habel
The University of Adelaide, South Australia
cullen.habel@adelaide.edu.au

Larry Lockshin
University of South Australia, South Australia
larry.lockshin@unisa.edu.au

This paper extends a well-established normative model of market behaviour to the
analysis of dynamics in repeat purchase (FMCG) markets. Whilst the NBD-Dirichlet is a
stochastic market model most commonly associated with stationary markets, we argue
that it may also be used to develop sequential snapshots of a market that changes over
time.

A broad range of observed changes in a market environment can be parsimoniously
represented by changes in just three category parameters of the NBD-Dirichlet.
Many of the changes in brand performance are also represented by changes in each
brand-specific (alpha) parameter of the NBD-Dirichlet.

We also harness the double jeopardy (DJ) line as a method of describing these
dynamics. A DJ line is an x-y plot of a penetration measure against average purchase
frequency for brands in a market. The position and shape of a DJ line can be expected
to change as market conditions change.

In drawing the theoretical double jeopardy lines for consecutive periods we use an NBD-
Dirichlet based functional form. This gives the meanings of the NBD-Dirichlet parameters
a visual dimension and allows them to be used to develop predictions for brand growth.
The NBD-Dirichlet parameters can be interpreted as category acceptance (K), weight of
category purchasing (A), category competition (S) and each brand's strength (alpha_j for
brand j).

From the analysis in this paper, we establish that there are three types of brand growth
– balanced, expansive and reductive – and that these correspond to different patterns of
parameter changes. We conclude that the infinite array of nonstationary market
behaviours can be given some structure through a sound understanding of NBD-Dirichlet
parameters, viewed through the lens of the DJ line.
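
For reference, a classic Dirichlet-type statement of double jeopardy, given here as the standard textbook approximation rather than the exact functional form used by the authors, links brand j's penetration b_j and average purchase frequency w_j through

    w_j (1 - b_j) \approx w_0 (1 - b_0),

with (b_0, w_0) the values for a reference brand: smaller brands suffer twice, in penetration and in frequency, so the brands of a category trace a single rising curve in the (b, w) plane whose position and shape shift as the category parameters K, A and S change.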

Keywords: Dynamics, NBD-Dirichlet, Double Jeopardy, Stochastic Models, DJ Line

34
ICADABAI 2009 – Abstracts

Compelling Signals: Competitive Positioning Responses to Service Mark Filings

Alka Varma Citrin


College of Management, Georgia Institute of Technology, Atlanta
alka.citrin@mgt.gatech.edu

Matthew Semadeni
Department of Management & Entrepreneurship, Kelley School of Business,
Indiana University, Bloomington
semadeni@indiana.edu

This research examines the characteristics of firm signals that influence whether
competitor firms move toward or away from a predecessor firm’s market position.
Specifically, we examine if competing firms in the professional service industry follow (or
stay away from) the market position defined by a market predecessor’s trademark linked
to a service, referred to as a service mark. We predict differential effects for the market-
positioning responses of follower firms depending on the interaction of two factors
signaled in a firm’s service mark application: (1) existing firm capability to address a
market space opportunity and (2) firm commitment to pursue that opportunity. Using a
novel combination of text and network analysis, we examine all service mark filings by
the top 50 professional service firms from 1989 to 1999. Results indicate that when
existing capabilities are signaled as being low, firms attract greater competitive overlap
compared to when signaled capabilities are high. The level of commitment signaled by
the predecessor firm moderates this relationship.

Key Words: market signaling, competitive positioning, service marks, time series
analysis

35
ICADABAI 2009 – Abstracts

Using LISREL for Structural Equation Sub-Models

Pradip Sadarangani
IIMB, Bangalore, Karnataka
pradip@iimb.ernet.in

Sridhar Parthasarathy
IIMB, Bangalore, Karnataka,
Sridharp03@iimb.ernet.in

LISREL is a package used to perform analysis of covariance structures, also known as
Structural Equation Modelling (SEM). There are other programs that also perform this
type of analysis, of which the best known is EQS.

Today, however, LISREL is no longer confined to SEM. The latest LISREL for Windows
includes other modules for applications such as data manipulation, basic statistical
analyses, hierarchical linear and non-linear modelling and generalized linear modelling.
We address the concerns of beginners to LISREL and provide normative guidelines for
modelling various multivariate techniques such as Exploratory Factor Analysis,
Confirmatory Factor Analysis, multiple regression, ANOVA/MANOVA and multiple group analysis.

Keywords: Linear Structural Relations, Structural Equations Model, Causal Models

36
ICADABAI 2009 – Abstracts

Covering Based Rough Set Approach to Uncertainty Management in Databases

B.K. Tripathy, School of Computing Sciences, VIT University, Vellore, Tamilnadu
tripathybk@rediffmail.com

V.M. Patro, Biju Pattanaik Computer Centre, Berhampur University, Berhampur, Orissa
vmpatro@gmail.com

Relational databases were extended by Beaubouef and Petry to introduce rough relational
databases, fuzzy rough relational databases and intuitionistic rough relational
databases. The introduction of these concepts into the realm of databases enhanced the
capabilities of databases by allowing for the management of uncertainty in them. Rough
sets, due to their versatility, can be integrated into an underlying database model,
relational or object oriented, and also used in the design and querying of databases.
Covering based rough sets provide generality as well as better modeling power than basic
rough sets. Also, this new model unifies many other extensions of the basic rough set
model. In this article, we introduce the concept of covering based rough relational
databases and define basic operations on them. Besides comparison with previous
approaches, it is our objective to illustrate the usefulness and versatility of covering
based rough sets for uncertainty management in databases.
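
A minimal sketch of the underlying approximations, on illustrative data, is below; the relational operators proposed in the paper are built on these notions.

    # Covering-based approximations: given a covering of the universe (blocks
    # may overlap, unlike a partition), compute lower/upper approximations of X.
    U = {1, 2, 3, 4, 5, 6}
    covering = [{1, 2}, {2, 3, 4}, {4, 5}, {5, 6}]   # blocks may overlap
    X = {2, 3, 4, 5}                                 # target set

    lower = set().union(*(b for b in covering if b <= X))   # certainly in X
    upper = set().union(*(b for b in covering if b & X))    # possibly in X
    print("lower:", lower)   # {2, 3, 4, 5}
    print("upper:", upper)   # {1, 2, 3, 4, 5, 6}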

Key words: CB-rough sets, CB-fuzzy rough sets, CB-intuitionistic fuzzy rough sets, CB-
rough relational operators, CB-intuitionistic fuzzy rough relational operators.

37
ICADABAI 2009 – Abstracts

Real Time Spike Detection from Micro Electrode Array Recordings Using Wavelet Denoising and Thresholding

Nisseem S. Nabar* and K. Rajgopal


Department of Electrical Engineering,
Indian Institute of Science, Bangalore.

E-mail: 8nabarns@iimahd.ernet.in, kasi@ee.iisc.ernet.in

Brain Machine Interfaces (BMIs) can be used to restore functions lost through injury or
disease. Micro Electrode Arrays (MEAs) are an invasive method of acquiring neural
signals, which can then be used as control signals. The first requirement for such
a use is to extract time-stamped spike trains from the MEA recordings. For use in
BMI applications this extraction needs to be real time and computationally inexpensive.

We propose an algorithm based on wavelet denoising and thresholding of the denoised
signal. Wavelets provide localization in both time and frequency domains and the
ability to analyze signals at different resolutions. Appropriate thresholding of wavelet
coefficients followed by reconstruction provides a less noisy version of the input signal.

The proposed algorithm is tested on simulated data whose parameters have been derived
from actual MEA recordings. It is found to run in real time and to have variable memory
requirements, which makes it well suited for BMI applications. A detection accuracy of
90% with false positives below 5% is achieved, compared with the detection accuracy of
80% with 10% false positives reported in the literature (Kim and Kim, 2003).
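
A minimal sketch of such a pipeline, assuming the PyWavelets package and synthetic data (not the authors' exact parameter choices), is:

    # Wavelet-decompose the signal, soft-threshold detail coefficients,
    # reconstruct, then time-stamp threshold crossings as spikes.
    import numpy as np
    import pywt

    rng = np.random.default_rng(4)
    sig = rng.normal(0, 1, 2048)
    for p in (200, 700, 1500):
        sig[p:p + 4] += 8.0                           # synthetic spikes

    coeffs = pywt.wavedec(sig, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))       # universal threshold
    den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    clean = pywt.waverec(den, "db4")[: len(sig)]

    spikes = np.flatnonzero((clean[1:] > 4.0) & (clean[:-1] <= 4.0))  # up-crossings
    print("spike time stamps:", spikes)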

Key words: Time series analysis, signal processing, analysis of biological data.

* Nisseem Nabar is currently pursuing a Post Graduate Diploma in Business Management at the Indian Institute of Management, Ahmedabad.

38
ICADABAI 2009 – Abstracts

Motif Finding Using DNA Data Compression

Anjali Mohapatra (IIIT, Bhubaneswar), P.M. Mishra (EIC Electricity, Orissa), S. Padhy (Utkal University)

anjali.mohapatra@iiit-bh.in

The problem of finding motifs in biological sequences has been studied extensively due to
its paramount importance. Researchers have taken many different approaches, and the
progress made in this area is very encouraging. As we move to higher organisms with more
complex genomes, more sensitive methods are needed. Despite extensive studies over the
past decade, this problem is far from being satisfactorily solved. DNA-specific
compression algorithms exploit the repetitiveness of bases in DNA sequences. However,
compression of DNA sequences is recognized as a tough task and needs much more
improvement. In this paper we exploit a compression method based on the fact that the
variation of sequences in the same organism is small and finite. We use and extend a
Generalized Suffix Tree (GST) based compression approach with a proposed scoring method
for motif finding problems.
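
As a simplified stand-in for the scoring idea, and emphatically not the paper's GST-based method, the sketch below counts k-mers and scores over-representation against a uniform background; repeated substrings are also what make DNA sequences compressible.

    # Count k-mers across sequences and score over-represented candidates.
    from collections import Counter

    seqs = ["ACGTACGTGGACGT", "TTACGTCCACGTAA", "GGACGTACGTTTTT"]
    k = 4
    counts = Counter(s[i:i + k] for s in seqs for i in range(len(s) - k + 1))

    total = sum(counts.values())
    expected = total / 4 ** k                 # uniform background expectation
    scores = {m: c / expected for m, c in counts.items()}
    top = sorted(scores.items(), key=lambda kv: -kv[1])[:3]
    print("top candidate motifs:", top)       # ACGT should dominate
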
Key Words: DNA, compression, suffix tree, motif, GST.

39
ICADABAI 2009 – Abstracts

An Approach to Summarization of Hindi Text by Extraction

Swapnali Pote1, L.G.Mallik

G. H. Raisoni College of Engineering, Nagpur, Maharashtra

1srkurhade@yahoo.com

This paper proposes a method to generate a summary of Hindi textual data by extracting
the most relevant sentences from the text. The method is based on a combination of
statistical and linguistic approaches. The growth of the Internet has led to an abundance
of digitally stored information, which must be filtered and condensed to avoid drowning
in it. With the growth of the Indian economy, broadband will soon reach most parts of
India, and the non-English (Hindi) speaking user base will then outgrow the
English-speaking user base in India. In the coming years a Hindi text summarizer will
therefore become essential. The summary generated from the text will help readers learn
new facts without reading the whole text.
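
A minimal sketch of the statistical component alone (the paper combines it with linguistic rules), scoring sentences by normalized word frequency, is:

    # Extractive summarization: weight each sentence by the mean corpus
    # frequency of its words and keep the top-ranked sentence(s).
    from collections import Counter

    text = "भारत एक विशाल देश है। भारत में अनेक भाषाएँ बोली जाती हैं। क्रिकेट यहाँ लोकप्रिय है।"
    sentences = [s.strip() for s in text.split("।") if s.strip()]

    freq = Counter(w for s in sentences for w in s.split())

    def weight(s):                       # sentence weight = mean word frequency
        words = s.split()
        return sum(freq[w] for w in words) / len(words)

    summary = max(sentences, key=weight) + "।"
    print(summary)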

Keywords: Text summarization, Text extraction, Sentence weight, Hindi

40
ICADABAI 2009 – Abstracts

Formal Modeling of Digital Rights Management for Sustainable Development of e-Commerce

Shefalika Ghosh Samaddar


Motilal Nehru National Institute of Technology
Allahabad

shefalika@mnnit.ac.in, shefalika99@yahoo.com, shefalika99@gmail.com

Structured approaches for intellectual property rights management systems provide
prescriptions and guidelines for the development process; typically, requirements are
written in natural language without a formal foundation. Our approach combines the
advantages of structured approaches and formal methods by capturing requirements using
structured approaches and subsequently transforming them into a formal description. This
yields a sample Z specification from an Object Role-Rank Model (ORRM) schema. The Z
specification is the end product of a transformation procedure from ORRM to Z, obtained
by choosing suitable types and variables for the Z specification and predicates that
express all the constraints required to model the ORRM. The representation in Z preserves
ORRM's concepts in a way that aids validation. An ORRM schema successfully differentiates
between object oriented concepts and role-centric dynamic objects.

The approach is illustrated by modeling the management of copyright on the Internet,
known as Digital Rights Management (DRM), a mechanism to enforce access control over a
resource without considering its location. The DRM framework underlying the whole value
chain from creators to end-users, based on the different roles assumed at different
points in time, is obtained by transforming the core concepts of creations, rights and
actions. The set of actions operating on content, using and assuming various roles at
different junctures of the application domain, are the building blocks of the complex
copyright domain and ensure interoperability. Rights and action patterns are modeled as
role-ranks of actions, and concrete actions are modeled as instances of these role-ranks.
If some right or license requires an action, the role-rank it assumes is checked, and
instances are classified dynamically through the roles they assume. The resulting
copyright model framework is flexible enough to model the moral exploitation of content.

Keywords: Formal Model, Object Role-Rank Model, IPR Management, Digital Rights
Management (DRM), Z Specification of DRM, Object Z.

41
ICADABAI 2009 – Abstracts

Recovery Rate Modeling for Consumer Loan Portfolio

Tamal Krishna Kuila1 and Gyanendra Singh2


ICICI Bank
Email: 1taml.kuila@icicibank.com, 2gyanendra.singh@icicibank.com , singh.gyan@gmail.com

Basel Accord II allows banks to estimate their risk determinants under the IRB
approach. The present study tries to provide an empirical framework for estimating Loss
Given Default (LGD) for a retail consumer loan portfolio. LGD is modeled using a hurdle
regression model and a family of censored regression models. Results indicate the ability
of the model to estimate LGD with a bimodal distribution. The key determinants affecting
LGD are the total outstanding amount as a proportion of loan size, the loan size itself,
and the historical payment performance of the consumer.
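
A minimal two-part (hurdle-style) sketch on synthetic data, using off-the-shelf scikit-learn models rather than the paper's exact censored-regression family, is:

    # Hurdle idea: logistic model for whether any loss occurs, linear model
    # for loss severity among loss cases; combine for expected LGD.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(5)
    n = 2_000
    X = rng.normal(size=(n, 3))        # e.g. outstanding share, loan size, history
    loss_occurs = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)
    severity = np.clip(0.5 + 0.2 * X[:, 1] + rng.normal(0, 0.1, n), 0, 1)
    lgd = loss_occurs * severity       # bimodal: mass at zero plus positive part

    hurdle = LogisticRegression().fit(X, loss_occurs)
    pos = lgd > 0
    sev_model = LinearRegression().fit(X[pos], lgd[pos])

    expected_lgd = hurdle.predict_proba(X)[:, 1] * sev_model.predict(X)
    print("mean predicted LGD:", round(expected_lgd.mean(), 3))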

Key words: Credit risk, Hurdle model, Recovery rate, LGD

42
ICADABAI 2009 – Abstracts

The Proactive Pricing Model: Using a Forecasted Price Escalation Function

Satavisha Mukherjee & Sourabh Datta

Analytics,
Genpact India
Kolkata

E-mail: satavisha.mukherjee@genpact.com / sourabh.datta@genpact.com

Pricing is always a dynamic decision in a market economy. In a not-so-competitive
market, where the price can be set or adjusted within a limit, pricing decisions are
taken not only to protect margin but also to establish a strategic position in the
market. Proactive pricing moves are expected to reap better results than following
others' prices. In such situations, with multiple and fluctuating cost heads, maintaining
a steady margin is a challenge in the absence of hedging. Moreover, any ad hoc change in
price to account for an increase in cost can put market share at risk from competitors.

This paper proposes a method for proactive price adjustment that addresses both the
increase in cost and the retention of strategic market share, and does so with a targeted
profit. A time series forecasting technique is used to predict the input cost increase.
After the price function is designed with the forecasted cost, the concept of elasticity
is used to capture the market sensitivity of the pricing moves. A final adjustment
aligning the pricing with the business profit target then gives the price function.
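
A hedged sketch of the forecasting step, assuming the statsmodels ARIMA implementation and an illustrative pass-through factor standing in for the paper's elasticity adjustment, is:

    # Fit ARIMA to an input-cost index, forecast it, and convert the cost
    # change into a damped price adjustment.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(6)
    cost = 100 + np.cumsum(rng.normal(0.3, 1.0, 120))   # monthly input-cost index

    fit = ARIMA(cost, order=(1, 1, 1)).fit()
    forecast = fit.forecast(steps=6)                    # forecast cost escalation

    pct_cost_rise = (forecast[-1] - cost[-1]) / cost[-1]
    pass_through = 0.6       # illustrative damping, standing in for elasticity step
    price_adjustment = pass_through * pct_cost_rise
    print(f"forecast cost rise: {pct_cost_rise:.2%}, price move: {price_adjustment:.2%}")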

Key Words: Producer Price Index (PPI), Time Series Modeling, Autoregressive
Integrated Moving Average (ARIMA), Arc Elasticity, Sales Segment, Contribution Margin
(CM).

43
ICADABAI 2009 – Abstracts

Behavioural Segmentation of Credit Card Customers

Jyoti Ramakrishnan, Ramasubramanian Sundararajan and Pameet Singh


Computing & Decision Sciences Lab
GE Global Research
John F. Welch Technology Centre
Bangalore

Email: {jyoti.ramakrishnan, ramasubramanian.sundararajan, pameet.singh}@ge.com

Many financial companies consider segmenting their customers based on the way these
customers transact with the company, to help them design customized marketing programs
for each segment, which would improve customer satisfaction and eventually increase
customer profitability. In this paper we describe a way of segmenting credit card
customers based on their transactional behaviour with the card company. The segmentation
solution was obtained using a combination of factor analysis, k-means clustering and
transition analysis models. Factor analysis was used to obtain key factors from the
available set of transactional variables. The factors were found to encapsulate the
following characteristics: monetary value, utilization, spending frequency, speed of
activation, preference for POS/ATM transactions and propensity to "revolve" (i.e., pay
interest on borrowings). The k-means clustering algorithm was used on these factor scores
to arrive at the segments. The customers in the study were segmented into 6 groups based
on their behavior. Segment dynamics were analyzed and used to formulate recommendations
for differentiated marketing treatment of each of these 6 segments to enhance profitability.
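
A minimal sketch of the two-stage pipeline on synthetic data (the transition analysis step is omitted) is:

    # Extract factors from transactional variables, then cluster factor scores.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    Xraw = rng.normal(size=(1_000, 12))        # 12 transactional variables

    Z = StandardScaler().fit_transform(Xraw)
    scores = FactorAnalysis(n_components=6, random_state=0).fit_transform(Z)
    segments = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
    print("segment sizes:", np.bincount(segments))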

Keywords: CRM, Data Analysis in Banking and Financial Services, Cluster Analysis

44
ICADABAI 2009 – Abstracts

Precision Targeting Models for Improving ROI of Direct Marketing Interventions

Santhanakrishnan R1, Sivakumar R2, Harish Akella3, Bimal Horo4

Infosys Technologies Limited

E-mail: 1santhanakrishnan_r@infosys.com, 2Sivakumar_r05@infosys.com, 3harish_akella@infosys.com, 4Bimal_horo@infosys.com

Activities across the Customer Relationship Management cycle involve evaluating customers
or prospects on multiple behavioral dimensions, or on shades of a single dimension. Being
able to prioritize shoppers who not only respond to promotional mailers but deliver
higher incremental sales through repeat purchases over those who just walk into a store
once, or mailing prospects who not only sign up for a credit card but activate and spend
over those who just sign up, are examples of situations that test the ability to
differentiate between seemingly relevant behavior and truly relevant behavior. The
conventional approach used by CRM Analytics practitioners in such situations is to score
the customer or prospect base with multiple models. In testing the conventional
multi-model approach against a lesser known alternative that uses a single model with
carefully chosen weighting schemes, we find that the alternative approach has the
potential to address a wide range of business objectives, virtually eliminates the need
to build and maintain multiple models and the complex selection strategies based on them,
and maintains business impact and ease of implementation.

Keywords: Customer Relationship Management (CRM) Analytics, Statistical Scoring


Models, Response/Activation & Incremental Sales Models, Direct Marketing Campaigns

45
ICADABAI 2009 – Abstracts

Customer Purchase Behaviour Prediction Approach for Managing the Customer Favourites List on a Grocery E-Commerce Portal

Sunit Pahwa
Prasanna Janardhanam
Rajan Manickavasagam

The approach discussed in this paper uses a machine learning technique to capture the
buying patterns of the products bought by a customer over a period of time. It then
predicts and generates a list of items most likely to be bought by the customer on his
next visit to the retailer's grocery e-commerce portal. Since the prediction is made at
the product level with just two possible outcomes, (a) the customer will purchase the
product or (b) the customer will not, the problem amounts to classifying each product for
a customer as a likely purchase or a likely non-purchase. As this is a typical
classification problem, we used the Naïve Bayes Classifier to generate a likelihood score
of purchase for each item previously purchased by a customer (and for each customer).
This likelihood score is then used to rank all the items and generate the favourites list
for each customer. The approach was accurate enough to predict about two-thirds of the
baskets of about three-fourths of the customers. The immediate result is an enhanced
online experience for customers, who feel their needs are better understood by the online
retailer.
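
A minimal sketch of the classification step for one customer, with hypothetical recency features standing in for the actual feature set, is:

    # Score (product, visit) pairs with a Bernoulli Naive Bayes classifier and
    # rank candidate items by purchase likelihood to build the favourites list.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(8)
    # binary features, e.g. "bought in each of the last 3 visits" (hypothetical);
    # label = bought on the next visit
    X = rng.integers(0, 2, size=(500, 3))
    y = (X.sum(axis=1) + rng.integers(0, 2, 500) >= 2).astype(int)

    model = BernoulliNB().fit(X, y)
    candidates = np.array([[1, 1, 1], [1, 0, 0], [0, 0, 1]])
    likelihood = model.predict_proba(candidates)[:, 1]
    ranking = np.argsort(-likelihood)       # favourites list order
    print("purchase likelihoods:", likelihood.round(3), "ranking:", ranking)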

Keywords: Data Mining, Predictive Analytics, Bayesian Methods, Data Analysis in


Retailing

46
ICADABAI 2009 – Abstracts

Product Inventory Management at BPCL & Effective and Efficient Distribution of Products to Demand Centers

V. Ramachandran
BPCL

The paper highlights the process adopted by BPCL in managing its system inventories of
petrol and diesel at about 101 locations (22 terminals, 9 tap off points and 70
depots/demand points).

The inventory management process becomes especially challenging when product supplies to
the 101 demand centers are catered to mainly by 3 own refineries, 10 PSU refineries, 2
private refineries and 16 pipeline tap off points, by means of different modes of
transport such as pipelines, rail, road and ship.

47
ICADABAI 2009 – Abstracts

Indian Mutual Funds Performance: 1999-2008

Rajkumari Soni

Department of Accounting and Financial Management,


Faculty of Commerce,
The Maharaja Sayajirao University of Baroda,
Vadodara, Gujarat

E-mail: rajkumari_soni2001@yahoo.com

Mutual funds are among the most prominent investment institutions today, and mutual fund
performance is one of the most frequently studied topics in the investment area in many
countries. The reasons for this popularity are the availability of data and the
importance of mutual funds as vehicles for investment by both individuals and
institutions. Since mutual funds have become popular, research by institutions and
academicians has grown in importance. The present study examines the past performance of
mutual funds as a criterion for investors' future choices. The analysis starts from the
fund attributes that influence returns; the hypotheses are based on fund characteristics,
i.e., beta, standard deviation, fund size, NAV, fund age, management tenure and expense
ratio. The study covers 47 equity mutual fund schemes (with equity option) for which data
is available for the entire study period, i.e., from Jan 1999 to Dec 2008. The results
indicate that the hypothesized relationships between mutual fund performance and the
explanatory variables are generally upheld. The study provides a comprehensive
examination of recent Indian mutual fund performance by analyzing fund returns and the
fund attributes affecting performance, and makes an effort to link performance to
fund-specific characteristics.
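
A hedged sketch of the attribute regression on synthetic data, assuming statsmodels, is:

    # Regress fund returns on fund characteristics (all figures illustrative).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    n = 47
    X = np.column_stack([
        rng.normal(1.0, 0.2, n),    # beta
        rng.normal(5.0, 1.0, n),    # standard deviation
        rng.lognormal(4, 1, n),     # fund size
        rng.uniform(1, 10, n),      # fund age
        rng.uniform(0.5, 2.5, n),   # expense ratio
    ])
    ret = 8 + 3 * X[:, 0] - 0.5 * X[:, 4] + rng.normal(0, 1, n)

    fit = sm.OLS(ret, sm.add_constant(X)).fit()
    print(fit.params.round(2))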

Key Words: Mutual funds Performance, Correlation, Regression analysis, Mutual Funds
characteristics

48
ICADABAI 2009 – Abstracts

Household Meat Demand in India – A Systems Approach Using Micro Level Data
Amir Bashir Bazaz1, Astha Agarwalla2 and Vinod Ahuja3
Indian Institute of Management, Ahmedabad, Gujarat

E-mail: 1amirb@iimahd.ernet.in, 2asthag@iimahd.ernet.in, 3ahuja@iimahd.ernet.in

This study presents the results of the estimation of a linear approximate almost ideal
demand system (LA/AIDS) for Indian meat products, using cross-sectional household level
data collected by the National Sample Survey Organisation in India as part of the 60th
round in 2004.

The paper uses a censored regression method for the system of equations to analyze the
consumption patterns for meat products. Heckman's two-step procedure was used to estimate
the demand system. In the first step, the Inverse Mills Ratio (IMR) was estimated using a
Probit model. In the second step, the IMR was included in the LA/AIDS model as an
independent variable while estimating the system of equations using the Seemingly
Unrelated Regression model.

The objective of this study is to provide econometric estimates of price and expenditure
elasticities for meat demand in India. Other demographic variables influencing the demand
for meat products are also identified, such as sector (rural/urban), religion, land
ownership and household size. The results reveal that the demand for beef, pork and fish
is elastic while that for egg and chicken is inelastic. The cross-price elasticity
estimates indicate that mutton and beef are substitutes for chicken, whereas egg and fish
are substitutes for each other.
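
A minimal sketch of the two-step procedure on synthetic data, assuming statsmodels and scipy, with OLS standing in for the SUR system, is:

    # Step 1: probit for the purchase decision; compute the inverse Mills
    # ratio from the fitted index. Step 2: share regression including the IMR.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(10)
    n = 3_000
    z = rng.normal(size=(n, 2))                 # selection covariates
    buy = (0.5 * z[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

    probit = sm.Probit(buy, sm.add_constant(z)).fit(disp=0)
    index = sm.add_constant(z) @ probit.params
    imr = norm.pdf(index) / norm.cdf(index)     # inverse Mills ratio

    x = rng.normal(size=n)                      # e.g. log price
    share = 0.3 + 0.1 * x + 0.05 * imr + rng.normal(0, 0.05, n)
    mask = buy == 1
    ols = sm.OLS(share[mask],
                 sm.add_constant(np.column_stack([x, imr])[mask])).fit()
    print(ols.params.round(3))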

Keywords: Price elasticity, Expenditure elasticity, Meat demand, Censored regression,


Consumption pattern

49
ICADABAI 2009 – Abstracts

The Lead-Lag Relationship between Nifty Spot and Nifty Futures: An Intraday Analysis

Priyanka Singh1 and Brajesh Kumar2


Indian Institute of Management Ahmedabad
Ahmedabad, India

Email: 1 priyakas@iimahd.ernet.in, 2 brajeshk@iimahd.ernet.in

This paper focuses on price discovery in the Indian stock market by taking the case of
Nifty spot and futures, using five-minute prices. The data covers two periods: a bull
market and a bear market. A Vector Error Correction Model is used to examine the lead-lag
relationship between Nifty spot and futures returns. It is found that futures returns
lead spot returns by as much as ten minutes in the bull market and thirty minutes in the
bear market, while spot returns lead futures returns by five minutes in the bull market
and thirty minutes in the recent bear market. A Vector Autoregressive model is used to
study price discovery in spot and futures volatilities; the futures market leads the spot
market by as much as twenty minutes in the bull market and twenty-five minutes in the
bear market. In conclusion, Nifty futures play no significant role in price discovery.
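
A minimal sketch of the lead-lag testing idea, assuming the statsmodels Granger causality utility and simulated five-minute returns in which futures lead spot by two intervals, is:

    # Test whether lagged futures returns Granger-cause spot returns.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(11)
    n = 2_000
    fut = rng.normal(0, 1, n)
    spot = np.roll(fut, 2) * 0.5 + rng.normal(0, 1, n)   # spot lags futures

    # second column is tested as a cause of the first
    data = np.column_stack([spot[2:], fut[2:]])
    res = grangercausalitytests(data, maxlag=4, verbose=False)
    print("lag-2 F-test p-value:", round(res[2][0]["ssr_ftest"][1], 4))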

Keywords: Granger Causality, Impulse Response, Weak Exogeneity

50
ICADABAI 2009 – Abstracts

Can ETF Arbitrage be Extended to Sector Trading?

Ramnik Arora, Utkarsh Upadhyay


Indian Institute of Technology, Kanpur

We design and deploy a trading strategy that mirrors the Exchange Traded Fund (ETF)
arbitrage technique for sector trading. Artificial Neural Networks (ANNs) are used to
capture pricing relationships within a sector using intra-day trade data. The fair price
of a target security is learnt by the ANN, and significant deviations of the traded price
from the computed (ANN-predicted) price are exploited. To facilitate arbitrage, the
output function of the trained ANN is locally linearly approximated. The strategy has
been backtested on intra-day data from September 2005. Results are very promising, with a
high percentage of profitable trades. With low average trade durations and ease of
computation, this strategy is well suited for algorithmic trading systems.
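
A minimal sketch of the pricing step, assuming scikit-learn and a synthetic linear pricing relation, is:

    # Learn a target security's fair price from peer prices, then flag large
    # deviations of the traded price from the model price as candidate signals.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(12)
    peers = rng.normal(100, 5, size=(2_000, 4))          # intra-day peer prices
    fair = peers @ np.array([0.4, 0.3, 0.2, 0.1])        # latent pricing relation
    traded = fair + rng.normal(0, 0.5, 2_000)            # noisy target prices

    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2_000,
                       random_state=0).fit(peers, traded)
    resid = traded - ann.predict(peers)
    signals = np.abs(resid) > 2 * resid.std()            # candidate trades
    print("signal count:", signals.sum())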

Keywords: ETF Arbitrage; Neural Networks; Sector Trading; Statistical Arbitrage

51
ICADABAI 2009 – Abstracts

Development of an Emotional Labour Scale in the Indian Context

Niharika Gaan
IMIS, Bhubaneshwar.
E-mail: niharikagaan@yahoo.com, n_gaan@imis.ac.in

This study describes the development and validation of an emotional labour scale (ELS),
tested on a sample of 491 respondents from B-schools in India. The ELS is a 12-item
self-report questionnaire that measures 4 facets of emotional labour in the workplace:
variety in emotional display, deep acting, surface acting and emotional regulation.
Estimates of internal consistency for the subscales ranged from .67 to .89. Confirmatory
factor analysis supported the 4-facet structure of unidimensional subscales, which
contradicts the six facets of the existing emotional labour scale. Evidence was also
provided for convergent and discriminant validity.
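
One of the reported diagnostics, Cronbach's alpha for a subscale's internal consistency, can be computed as in this small sketch on synthetic item responses:

    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    import numpy as np

    rng = np.random.default_rng(13)
    latent = rng.normal(size=491)
    items = latent[:, None] + rng.normal(0, 0.8, size=(491, 3))  # 3-item subscale

    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                           / items.sum(axis=1).var(ddof=1))
    print("Cronbach's alpha:", round(alpha, 2))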

Key Words: Deep acting, Surface acting, Automatic regulation, Variety in emotional
display, Emotional Labour

52
ICADABAI 2009 – Abstracts

Women in Small Businesses: A Study of Entrepreneurial Issues

Anil Kumar
Haryana School of Business,
Guru Jambheshwar University of Science & Technology,
Hisar, Haryana

E-mail: anil_k6559@yahoo.co.uk

This paper examines entrepreneurial issues of women in small businesses using a sample of
120 respondents from the state of Haryana. 26 statements were administered to women
involved in small businesses, with opinions taken on a five-point Likert scale. The
factor analytical model grouped the different entrepreneurial issues of women into nine
factors. Motivation-related issues can be tackled by imparting training in the management
of small enterprises; problems in handling finance and marketing products can also be
addressed during the training process. The requirement for separate support agencies can
be met by creating special cells, under the charge of women officials, within different
departments. Infrastructure assisting women in business should be further strengthened,
and policy relating to entrepreneurship development should be made more liberal for
existing and potential women entrepreneurs. There is also a need to redesign course
curricula to make them more self-employment oriented.

Key words: Entrepreneurship, Motivation, Training, Finance, Marketing

53
ICADABAI 2009 – Abstracts

Employees' Perception of the Factors Influencing Training Effectiveness

Amitabh Deo Kodwani (Institute of Management Technology, Ghaziabad) and Manisha K. (Consultant)

Email: deoamitabh@gmail.com

Indian Public Sector Enterprises (PSEs) are passing through massive changes due to rapid
technological change on one hand and competition from the private sector (especially
MNCs) on the other. To compete in a liberalized and globalized economy, PSEs are required
to improve their organisational effectiveness. These changes create a need for training
and development in PSEs for the optimal use of manpower, which will benefit both
employees and the organization.

The need for systematic training and development has also increased because of rapid
technological change and competition, which create new kinds of jobs and eliminate old
ones. New jobs require special skills, which may be developed in the existing workforce
by providing the necessary training; otherwise, employees train themselves by trial and
error or by observing others if no formal training programme exists in the organization,
and in that case they take much longer to learn new skills. Systematic training and
development not only increases skill levels but also increases the versatility and
adaptability of employees.

With changing times there is also a need to reexamine the existing system of training and
development and to look at training and development policies and practices from a new
perspective. Organizations need to rethink and modify these policies and practices to
obtain the maximum benefit, which is essential for improving organisational
effectiveness. The success of training depends not only upon the instructor, content,
inputs and training method, but also upon the participants' perception of the training,
training awareness, motivation to learn and transfer, learning efforts, training
participation and involvement, the training transfer climate, and training evaluation. To
make training more effective, employees' perception of training and development needs to
be made positive. This can be done by involving them in training and development
activities, by creating a good learning environment, and by helping and encouraging them
to learn and then practice that learning on the job.

Keywords: Learning environment, Organisational effectiveness, Training and


development.

54
ICADABAI 2009 – Abstracts

One Shoe Doesn’t Fit All: An Investigation into the


Processes that Lead to Success in Different Types of
Entrepreneurs

Anurag Pant
School of Business and Economics
Indiana University South Bend
Email: anurag@iusb.edu

Sanjay Mishra
School of Business
University of Kansas
Email: smishra@ku.edu

Some entrepreneurs who lack the cognitive ability to elaborate on issues can still be
successful. Such 'naïve' entrepreneurs have a lower need for cognition, lower recall, and
a higher feeling of knowing about a topic than sophisticated entrepreneurs. Consequently,
we expect them to be lower risk takers than sophisticated entrepreneurs. On the other
hand, naïve entrepreneurs induce higher empathetic support from key business associates
and employees, which lets them obtain more "resources" than sophisticated ones. A better
understanding of naïve entrepreneurs could help reduce new venture failure rates. This
paper uses content analysis to measure the constructs and structural equation modeling to
show their interrelations.

Keywords: Need for Cognition, Risk-taking, Empathy, Successful Entrepreneurs.

55
ICADABAI 2009 – Abstracts

Use of Analytics in Indian Enterprises: A Survey

M.J.Xavier
Kotler-Srinivasan Center for Research in Marketing
Great Lakes Institute of Management, Chennai
xavier@greatlakes.edu.in

Anil Srinivasan
Kotler-Srinivasan Center for Research in Marketing

Arun Thamizhvanan
Great Lakes Institute of Management, Chennai

In 2007, India accounted for one-third of the total $17-billion global market for analytics.
However, the adoption of analytics for decision making and for enhancing the customer
experience has been slow. While the term 'analytics' has found universal usage in almost
all business platforms, what it refers to and the specific contexts in which it ought to
be used remain ambiguous among senior managers in the Indian corporate milieu.

To uncover the antecedents of these observations, at least in part, we conducted a survey
among 84 senior managers across domains, company profiles and regions across the country.
We find that an effective understanding of analytics as a decision-craft tool grows with
time and experience for most individuals, and that heuristic-based decision making is
still prevalent. Further, only companies of a certain size (turnover of Rs 500 Crore or
more) make a concerted effort to maintain and update the data necessary for efficacious
use of analytics, and place this high on their priorities. Finally, many ambiguities
regarding the definition and scope of analytics were observed.

The paper discusses these findings in detail and concludes with a brief discussion on the
steps ahead.

Keywords: Rate of Adoption, Corporate Milieu, Ambiguities

56
ICADABAI 2009 – Abstracts

Using Data to Make Good Management Decisions

Anthony Hayter,
Department of Statistics and Operations Technology
University of Denver
Denver, Colorado, USA

Some thoughts and perspectives are provided on quantitative courses in the business
school curriculum and the challenges of motivating and equipping managers with
appropriate statistical techniques and skills. Experiences gained from teaching in the
business school environment and from consulting with companies will be presented.
Some case studies from the USA and Asia will be provided that illustrate how data
analysis has been employed to better understand business situations, and to provide the
basis for better decision making. Are businesses today making efficient use of the data
they have available, and would they be surprised by their own data?

Generally, the curriculum of a quantitative course in a business school would address the following goals:
• Develop an understanding of the basic concepts of probability and statistics, and
how they relate to managerial type problems and decision making.

• Develop experience performing and interpreting standard data analysis


methodologies.

• Obtain familiarity with a statistical software package.

However, a crucial aspect of this education is motivating students about the value of
this material to their businesses. Some Golden Rules will be discussed which provide the
foundation for this motivation. Additionally, the pitfalls and dangers of a lack of
appreciation of the complexities of probability theory are presented.
Case studies are a wonderful tool for motivating students. Case studies from various
countries and industrial sectors are presented that illustrate how data analysis
techniques can be applied to real problems and how they can have an impact on the
company’s financial bottom line.

Keywords: Case studies, Decision making, Quantitative courses in business schools

57
ICADABAI 2009 – Abstracts

Enhancing Business Decisions through Data Analytics and Use of GIS: A Business Application

Uma V
Datafix Technologies Pvt. Ltd., Mumbai

Abhinandan Jain,
Indian Institute of Management, Ahmedabad

This paper presents the application of data analytics and spatial analysis (GIS) in one
circle of a leading Indian telecom service organisation (Bharti Airtel: BA). BA was
facing the problem of increasing bad debts and collection costs in one of its circles,
and turned to a data analytics organisation (Datafix Technologies Pvt Ltd) to resolve the
issues using a data based approach. The available data included filled-up customer
application forms and the company's collection points; the customer data, such as name
and address, was of poor quality. The methodology included (i) identifying relevant
variables, (ii) splitting/exploding the data fields and deriving new variables, (iii)
deriving linkages to identify unique customers and their relationships, and (iv) using
spatial analysis to study and link customers and collection centers. The paper also
shares the rationale for choosing the specific tools in the Indian context.

The application helped BA in consolidating billing, reducing billing costs, identifying
spatial pockets of higher defaults, identifying corporate clients for building
relationships, and exploring the possibility of optimizing the location of collection
centers. The paper also shares efforts to generate emotional touch points from text data
in the Indian context.
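
A minimal sketch of the linkage step, using only the standard library to fuzzy-match noisy customer names (the engagement combined such linkage with address parsing and spatial analysis), is:

    # Link records likely belonging to the same customer via string similarity.
    from difflib import SequenceMatcher

    records = ["Rajesh Kumar Sharma", "Rajesh K. Sharma", "R. K. Sharma",
               "Priya Patel", "Prya Patel"]

    def similar(a, b, cutoff=0.75):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

    links = [(a, b) for i, a in enumerate(records)
             for b in records[i + 1:] if similar(a, b)]
    print(links)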

Key words: data parsing, data enrichment, data linkages, spatial analysis

58
ICADABAI 2009 – Abstracts

Trends in Technical Progress in India, 1968 to 2003

Ravindra H. Dholakia1, Astha Agarwalla2, Amir Bashir Bazaz3 & Prasoon Agarwal4
Indian Institute of Management, Ahmedabad, Gujarat.

E-mail: 1rdholkia@iimahd.ernet.in, 2asthag@iimahd.ernet.in, 3amirb@iimahd.ernet.in, 4prasoon@iimahd.ernet.in

The paper is based on the Input-Output (I-O) tables for the Indian economy for eight
years covering a period of 36 years, from 1968-69 to 2003-04. Technical progress (TP) in
the context of I-O tables is based on the concept of the production function defining the
relationship between gross output and material inputs as well as value added, and is
measured at the disaggregated sectoral level. The paper empirically verifies the
following hypotheses: (i) the Indian economy experienced substantial TP continuously
throughout the period; (ii) the rate of TP during the inward-looking and outward-looking
growth strategy phases of the Indian economy remained the same; (iii) the rate of TP at
the disaggregated sectoral level is almost uniform over time; and (iv) liberalization and
globalization have not impacted sectoral rates of TP differentially.

In order to measure the rate of TP, the available eight national I-O tables in India are first
made compatible for the number, scope and definitions of sectors as well as for prices
by converting them at constant 1993-94 prices. Simple measures are also used for
converting changes in technical coefficients into the aggregate rate of TP for a sector
and for the economy.
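
A minimal worked example of the underlying I-O accounting, with illustrative numbers, is below: with technical coefficient matrix A, gross output x satisfies x = (I - A)^{-1} f, and a fall in input coefficients for the same final demand f signals technical progress.

    # Leontief system: solve (I - A) x = f before and after a coefficient change.
    import numpy as np

    A_old = np.array([[0.20, 0.30],
                      [0.25, 0.10]])
    A_new = A_old * 0.95                   # 5% input saving across the board
    f = np.array([100.0, 80.0])            # final demand

    x_old = np.linalg.solve(np.eye(2) - A_old, f)
    x_new = np.linalg.solve(np.eye(2) - A_new, f)
    print("gross output, old vs new coefficients:", x_old.round(1), x_new.round(1))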

Keywords: Input-Output (I-O), Technical Progress, Technical coefficients, Indian economy, Liberalization, Globalization

59
ICADABAI 2009 – Abstracts

Terrorist Attack & Changes in the Price of the Underlying of Indian Depositories

Gaurav Agrawal
Atal Bihari Vajpayee - Indian Institute of Information Technology & Management
(ABV - IIITM), Gwalior

This research paper empirically examines the impact of the terrorist attack on London's
public transport system on 7th July 2005 on the stock returns of the underlying domestic
shares of Indian companies with ADRs/GDRs listed on the NYSE, NASDAQ and LSE. An event
study was conducted on the stock returns of the underlying domestic shares of the 8
Indian ADRs listed on the NYSE/NASDAQ and the 7 GDRs listed on the LSE, with 7th July
2005 as the event day. Abnormal Returns (ARs), Average Abnormal Returns (AARs) and
Cumulative Average Abnormal Returns (CAARs) were computed based on the market model,
using daily closing price data of the underlying companies and the S&P CNX Nifty. The
behavior of these variables was examined for 15 days before and 15 days after the event
day. The study found that the impact of the announcement on the event day itself was
insignificant for all baskets of underlying domestic shares of Indian ADRs/GDRs listed on
the NYSE/NASDAQ/LSE. However, during the event window of 31 days (i.e., -15 to +15), AARs
and CAARs were negative on most days for all baskets of ADRs/GDRs, which indicates that
the announcement carried important information leading to changes in the underlying stock
prices. The study therefore concluded that the terrorist attack held important
information for the baskets of underlying domestic shares of Indian ADRs/GDRs. Further,
the trend of CAARs, which declined continuously even several days after the event day,
indicated slow assimilation of information into stock prices, suggesting that the Indian
stock market was inefficient in the semi-strong form of the Efficient Market Hypothesis
(EMH) during the study period.
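
A minimal sketch of the market-model event study mechanics on synthetic data is:

    # Estimate alpha and beta in a pre-event window, compute abnormal returns
    # AR = R - (alpha + beta * Rm), then average across firms (AAR) and
    # cumulate (CAAR).
    import numpy as np

    rng = np.random.default_rng(14)
    n_firms, est, win = 8, 120, 31         # firms, estimation days, event window
    rm = rng.normal(0, 0.01, est + win)    # market (e.g. index) returns
    beta = rng.uniform(0.8, 1.2, n_firms)
    r = 0.0002 + np.outer(rm, beta) + rng.normal(0, 0.01, (est + win, n_firms))

    ar = np.empty((win, n_firms))
    for j in range(n_firms):
        b, a = np.polyfit(rm[:est], r[:est, j], 1)    # market-model fit
        ar[:, j] = r[est:, j] - (a + b * rm[est:])

    aar = ar.mean(axis=1)                  # average abnormal returns
    caar = aar.cumsum()                    # cumulative AAR over -15..+15
    print("CAAR at event window end:", round(caar[-1], 4))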

Key Words: Terrorist Attack, Event Study, Average Abnormal Returns (AARs), Efficient
Market Hypothesis (EMH), ADRs/GDRs

60
ICADABAI 2009 – Abstracts

Co-integration of US & Indian Stock Indexes

Gunjan Malhotra & Ankit Goyal

Institute of Management Technology, Ghaziabad

One of the most profound phenomena in present financial markets is the increase in
international financial transactions across the world. With the advent of liberalization,
globalization and advances in information technology, this process has gained much
momentum, resulting in a progressive integration of emerging markets with developed
markets. In line with the global trend, the present paper empirically investigates the
long-run equilibrium relationship between the US and Indian stock market indexes.
Econometric tests such as the cointegration test, the Augmented Dickey-Fuller test for
unit roots, and the Granger causality test have been employed in the analysis. We
conclude that the BSE Sensex is strongly influenced by the Nasdaq Composite Index, which
reinforces the long-run relationship between the two stock markets.
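
A minimal sketch of the test battery, assuming statsmodels and simulating two indexes sharing a stochastic trend, is:

    # ADF unit-root tests on each index, then an Engle-Granger cointegration
    # test between the two series.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, coint

    rng = np.random.default_rng(15)
    trend = np.cumsum(rng.normal(0, 1, 1_000))        # shared random walk
    sensex = 100 + trend + rng.normal(0, 2, 1_000)
    nasdaq = 50 + 0.5 * trend + rng.normal(0, 2, 1_000)

    print("ADF p-values:", round(adfuller(sensex)[1], 3),
          round(adfuller(nasdaq)[1], 3))
    print("Engle-Granger p-value:", round(coint(sensex, nasdaq)[1], 4))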

Keywords: Interrelationship between Indian and US stock markets, cointegration, unit root test, Granger causality test.

61
ICADABAI 2009 – Abstracts

A Common Financial Performance Appraisal Model for Evaluating District Central Cooperative Banks

A. Oliver Bright,
Dept. of MBA, Infant Jesus College of Engineering,
Tuticorin, Tamil Nadu.
Email: aobright67@yahoo.co.in

In India there are 31 State Cooperative Banks (SCBs) and 372 District Central Cooperative
Banks (DCCBs) functioning under their respective SCBs, with 97,224 Primary Agricultural
Cooperative Banks (PACBs) functioning under their respective DCCBs. The DCCBs are formed
in each district with the prime objective of uplifting the economically weaker sections
and poor agriculturists and of fostering savings among them. The Government of India
allots a huge amount to this sector every year. Among the DCCBs in India, 262 operate at
a profit, though most earn only a marginal profit; only 85 DCCBs earned enough profit to
declare a dividend, and many DCCBs sustain losses year after year. Every year the
performance of the DCCBs is assessed by awarding marks on 18 selected parameters with a
maximum of 800 marks; the State Cooperative Banks have circulated this format to the
DCCBs for evaluation.

The current performance appraisal system is incomplete and does not cover all the factors
essential for assessment. Moreover, there is no standard format universally applicable
for evaluating all the DCCBs in India. The Economic Value Addition (EVA) to the
development of the people of the respective region, such as the fostering of savings, the
generation of direct and indirect employment, self-employment, and the economic growth of
the economically weaker sections and poor agriculturists, is not included in it. The
profit earned and the dividend declared are not given due consideration, and collection,
reduction of NPA and deposit mobilization are not given due weightage. Considering all
these factors, an intensive study was made and a fair model for assessing the performance
of the DCCBs in India was developed. This model will help in assessing the performance of
the DCCBs and can also be used as a tool for evaluating their relative performance
levels. This "BRIGHT" model can be used for evaluating the performance and assessing the
relative position of DCCBs, and may be further extended to countries having a similar
cooperative banking or credit system.

Key words: Non-performing Assets (NPA), Economic Value Addition (EVA)

62
ICADABAI 2009 – Abstracts

Analysis of Rendering Techniques for the Perception of 3D Shapes

Vishal Dahiya
IBMR, Ahmedabad

The human vision system starts with just the shower of photons that hit the retina of
each eye and proceeds to construct 2D contours and 3D shapes by consulting various
sources of information such as shading, texture, motion, occlusion and binocular
disparity. In this process it uses many laws based on reflectance, geometry, projection
and lighting. An image is perceptually realistic if a viewer of the image synthesizes a
mental image similar to that synthesized by the virtual viewer. The human visual process
synthesizes many different signals into an internal mental image; normally the input to
this process is the light coming from the various surfaces in the scene. 3D shape
visualization is usually done on 2D screens, and the algorithms and techniques involved
in generating a 2D image from a 3D world coordinate system are collectively known as
rendering. Lighting models, shading techniques, the presence of textures and the
properties of the 3D shape's material produce very different rendering quality. Another
important factor that influences the visual quality of a 3D model is the Level Of Detail
(LOD); the perception of different LODs strongly depends on the selected rendering
technique. In this research paper, I explore the factors that influence the perception of
a rendered image and analyze the rendering techniques used in this area and their
limitations.

Key Words- Rendering, perception, techniques, 3D shapes.

63
ICADABAI 2009 – Abstracts

MMeR: An Algorithm for Clustering Categorical Data Using Rough Set Theory

B.K.Tripathy and M S Prakash Kumar.Ch


School of Computing Sciences
VIT University, Vellore, Tamilnadu

E-mail: tripathybk@rediffmail.com, manohar_vit@yahoo.co.in

So far, various techniques have been introduced in the field of clustering for dealing
with categorical data, but these algorithms are not able to handle uncertainty and lack
stability. Our algorithm uses Rough Set Theory (RST), which handles uncertainty from its
very basic definition. Though an algorithm named MMR has already introduced RST, it needs
a few consistency measures to improve its results, and these are what we worked on. Two
areas are improved: the selection of the splitting attribute and the selection of a
cluster for re-clustering. For re-clustering, the cluster with the highest average
distance is chosen rather than the cluster with the highest number of objects; this is
done by introducing a criterion for the distance between any two objects, basically
derived from the Hamming distance. The results obtained are evaluated by calculating the
purity of the clusters. For example, clustering the ZOO data set using MMeR resulted in a
purity of 96.5%, against 78%, the highest ever achieved by MMR. The algorithm can still
be developed further by introducing fuzzy properties; rough-fuzzy properties are already
defined.
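
A minimal sketch of the re-clustering criterion on illustrative categorical data is:

    # Pick the cluster with the highest average pairwise Hamming distance
    # (most internally diverse) rather than the largest one.
    import numpy as np
    from itertools import combinations

    def avg_hamming(cluster):
        pairs = list(combinations(cluster, 2))
        if not pairs:
            return 0.0
        return float(np.mean([(np.array(a) != np.array(b)).mean()
                              for a, b in pairs]))

    clusters = {
        "C1": [(0, 1, 0), (0, 1, 0), (0, 1, 1)],             # fairly pure
        "C2": [(1, 0, 2), (2, 2, 0), (0, 1, 1), (2, 0, 2)],  # diverse
    }
    target = max(clusters, key=lambda c: avg_hamming(clusters[c]))
    print("cluster chosen for re-clustering:", target)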

Keywords: Data Mining, Clustering, Rough Set Theory, MMR, Hamming Distance

64
ICADABAI 2009 – Abstracts

Role of Forecasting in Decision Making Science

Jyoti Verma & Sujata Verma

ISB&M Pune

Forecasting comprises the techniques that predict the future on the basis of probability.
This paper is about forecasting for a new product development project of Hyundai
Construction Equipment India Private Ltd (HCEIPL), which culminated in the completion of
the feasibility study for the company setting up a plant for the production of
excavators. HCEIPL is a wholly owned subsidiary of Hyundai Heavy Industries (HHI)
(Korea). The main objectives of this study are, first, to explore the best forecasting
technique for predicting the total cost of HCEIPL and, second, to check the financial
feasibility of HCEIPL over a period of five years.

A role model was selected on the basis of cross-sectional analysis, cash flow analysis,
and ratio analysis of the competitors in India against HHI (Construction Division);
L&T-Komatsu is the role model, on the basis of similar financial risk, growth and cash
flow characteristics. L&T-Komatsu's financials are estimated for the next five years; the
ratio of the forecasted value of expenditure to the forecasted value of sales of L&T over
the five years, adjusted by approximately 1-2%, is taken as the corresponding estimate
for HCEIPL from 2008 to 2012 for further calculation. Computation of the Net Present
Value (NPV) of the project is required to check its feasibility. Quadratic regression is
the best forecasting technique for a deterministic model; validation of the model is done
by residual analysis (the accuracy measures being MAPE, MSD and MAD) and the F-statistic
(for deciding upon the model). Quadratic regression gives exact point values, but since
future predictions are never exact, a confidence level is needed, and double exponential
smoothing can create the prediction interval. Since the MSD of double exponential
smoothing was higher than that of quadratic regression, differencing with lag one was
required. Double exponential smoothing is the best forecasting technique on the basis of
residual analysis, the F-statistic, and the t-statistic for the prediction interval.
Decision-making is an essential part of the management process. The Net Present Value of
the 230-crore project is positive, at around 6.38 crores, so the project should be
accepted.
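
A minimal sketch of two of the steps, Holt's double exponential smoothing and the NPV accept/reject rule, with figures that are illustrative rather than the project's actual cash flows, is:

    import numpy as np

    def double_exp_smooth(y, alpha=0.5, beta=0.3, horizon=5):
        # Holt's method: maintain a level and a trend, forecast by extrapolation
        level, trend = y[0], y[1] - y[0]
        for t in range(1, len(y)):
            prev = level
            level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return [level + (h + 1) * trend for h in range(horizon)]

    costs = np.array([110, 118, 125, 134, 141, 150], dtype=float)
    print("5-year cost forecast:", np.round(double_exp_smooth(costs), 1))

    # NPV rule: accept if the discounted cash flows are positive
    cash_flows = [-230, 50, 60, 70, 80, 90]       # crores, illustrative
    r = 0.10
    npv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    print("NPV (crores):", round(npv, 2), "-> accept" if npv > 0 else "-> reject")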

Keywords: Net Present value, double exponential, Quadratic regression, role model.

65
ICADABAI 2009 – Abstracts

Bullwhip Diminution Using Control Engineering

Sunil Agrawal & Mohit Salviya


PDPM IIITDM Jabalpur

Email: sa@iiitdm.in, mohit.salviya@gmail.com

The bullwhip effect is a well-known instability phenomenon in supply chains, related to
the amplification of demand volatility in the upper nodes of the chain. This paper
proposes a novel control engineering approach for analyzing the bullwhip effect, using an
exponential smoothing forecasting model with simple type 0, 1 and 2 systems representing
constant, linear and quadratic demand inputs respectively. Analyses of the bullwhip
effect with different demand trends are done using both the statistical and the control
engineering approaches. Using the control engineering approach, various techniques are
studied for explicitly calculating the associated noise form under the bullwhip effect;
this permits an analysis for minimizing the error for different smoothing factors under
constant, linear and quadratic demand inputs for the studied inventory policies.
Stability analysis of the output signals is done using the Bode plot and the root locus
plot. The representation of bullwhip in terms of noise transmission, and its reduction
via matching the bandwidth from the "Control Engineering Perspective", is also carried
out and the results are analyzed.

Keywords: Bullwhip Effect; Exponential Smoothing; Inventory Policy; Forecasting; Ordering Decisions.
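A minimal simulation sketch of the amplification being analysed (not the authors' control engineering model): exponential smoothing forecasts feed an order-up-to policy, and the bullwhip ratio is measured as the variance of orders over the variance of demand. The lead time, smoothing factor and demand process are assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, lead, alpha = 10_000, 2, 0.3
demand = 100 + rng.normal(0, 10, T)          # constant-mean demand (type 0 input)

forecast = np.empty(T)
forecast[0] = demand[0]
for t in range(1, T):                        # simple exponential smoothing
    forecast[t] = alpha * demand[t] + (1 - alpha) * forecast[t - 1]

# Order-up-to level S_t = (lead + 1) * forecast; order = demand + change in S
S = (lead + 1) * forecast
orders = demand[1:] + np.diff(S)

bullwhip = orders.var() / demand.var()       # variance amplification ratio
print(f"Bullwhip ratio Var(orders)/Var(demand) = {bullwhip:.2f}")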


Automatic Detection of Clusters

Goyal L.M. (Apeejay College of Engineering, Sohna, Gurgaon)
Johari Rahul (GGSIPU, USIT, Delhi)
Mamta Mittal (CDED, TU Patiala)
Kaushal V.P. Singh (CDED, TU Patiala)

Email: lalitgoyal78@rediffmail.com

Knowledge discovery is the primary goal of data warehousing. Data mining, one of the
steps in the knowledge discovery process, is a technique for extracting meaningful
information from large databases or data warehouses. Mining can be done using different
techniques. Clustering is one such technique, which partitions a database into various
groups, and its use in data mining is growing very fast. There are different clustering
methods, but the major focus here is on partitioning-based clustering, which requires
prior information from the outside world about the number of clusters into which the
database is to be divided. Today, however, there is a need for algorithms that can
generate clusters automatically. The objective here is to propose a new partitioning-based
clustering algorithm that can generate clusters automatically without any previous
knowledge on the user's side.

Keywords: KDD, Data mining, partitioning-based clustering
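As an illustration of the goal (not the proposed algorithm itself), one common way to make a partitioning method choose the number of clusters automatically is to sweep over k and keep the value with the best silhouette score. A minimal Python sketch on toy data:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=7)  # toy data

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=7).fit_predict(X)
    score = silhouette_score(X, labels)     # higher = better-separated clusters
    if score > best_score:
        best_k, best_score = k, score

print(f"Chosen k = {best_k} (silhouette = {best_score:.3f})")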


Revenue Management

Patita Paban Pradhan


NIMT Ghaziabad, Delhi

Patitapaban2009@gmail.com

Revenue Management (RM) is a relatively new field currently receiving much attention
from researchers and practitioners; it essentially means setting and adjusting prices at a
tactical level in order to maximize profit.

Clearly, traditional well-known pricing techniques are closely related; the new twist,
however, is that RM avails itself of sophisticated demand forecasting and pricing based
on research in areas such as management science, economics and mathematics.

Due to the availability of a vast amount of data through customer relationship
management systems that can be used to calibrate the models, these techniques have
had a tremendous impact on the airline industry, where RM was first applied, and
subsequently in other industries such as car rentals, cargo and hotels.

As part of ongoing changes in the industry, companies throughout the entire hospitality
spectrum are placing a strong emphasis on implementing major operational changes.
Beyond recognizing that meaningful cost reductions must be achieved without
compromising safety, capacity and service levels, they are also looking at reducing costs
by increasing flexibility and improving asset utilization through an RM strategy. In doing
so, they continue to reassess their true core business.

Keywords: RM, RM & Pricing, RM in Hotel Industry, RM vs. MIS, Problem in Future
Research


Data Analysis using SAS in Retail Sector

Shyamal Tanna (Globsyn Business School, Ahmedabad; tannashyamal@gmail.com)
Sanjay Shah (S V Institute of Computer Studies & Research, Kadi; prof_smshah@yahoo.com)

This research paper is focused on the analysis of available data in the retail sector. In
this experiment, a store wants to examine its customer base and understand which of its
products tend to be purchased together, and has chosen to conduct a market basket
analysis of a sample of its customers. After this analysis, the store can place together
the items that customers tend to buy jointly, increasing the chances that customers buy
both products.

Association rules are used for the market basket analysis. The process flow for this
experiment involves, first, selecting the input data source node; this data set contains
the grocery products purchased by 1,001 customers, with twenty possible items
represented. The next phase of the experiment concentrates on defining the roles of the
different variables, such as customer-id and product-nm; from here the association nodes
are configured, and lastly the model is run. It is hoped that the resulting support
percentages will provide conclusive evidence that certain products should be placed
together.

Key Words: Association node, Market basket analysis, Support
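A minimal sketch of the support and confidence computations behind such an analysis, in plain Python rather than the SAS association node, with invented baskets:

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"milk", "butter", "bread"}, {"milk"}]
n = len(baskets)

def support(*items):
    """Fraction of baskets containing all the given items."""
    return sum(set(items) <= b for b in baskets) / n

# Rule {bread} -> {milk}: confidence = support(bread, milk) / support(bread)
s_pair = support("bread", "milk")
conf = s_pair / support("bread")
print(f"support(bread, milk) = {s_pair:.2f}, confidence = {conf:.2f}")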


Segmenting the Apparel Consumers in the Organized Retail Market

Bikramjit Rishi
Institute of Management Technology (IMT)
Raj Nagar, Ghaziabad – U.P.

Email: brishi@imt.edu

Retailing in India has emerged as one of the most dynamic and fast-paced industries,
with several players entering the market. Apparel retailing in India is gradually becoming
a major contributor to the country's retailing growth. The whole concept of shopping in
the apparel category has undergone change in terms of formats and consumer buying
behavior, ushering in a revolution in shopping. This study makes an effort to understand
the Indian apparel buyer so that Indian retailers can devise strategies to fulfill the
buyers' needs in a better way. The study highlights four segments: Modern & Professional,
Orthodox, Incautious and Perfectionist. It further invites researchers in this field to
undertake more in-depth analysis for a better understanding of the Indian apparel buyer.

Keywords: Apparel buying, Indian consumer, Cluster analysis.


The Impact of Psychographics on the Footwear Purchase of Youth: Implications for the Manufacturers to Reposition their Products

V.R.UMA
Christ University, Bangalore

This paper focuses on the influence of psychographics on the footwear purchases of
Indian youth. For the purpose of the study, 401 males and 401 females in the age group
of 19 to 26 from Bangalore were considered. Cluster analysis revealed that 62% of the
male population comprised Fashionables, 15% Economicals and 23% Independents. In
the case of females, 6 clusters were formed: 6% were Traditionals, 38% Economicals,
12% Independents, 3% Health Conscious, 38% Fashionables and 3% Economic
Fashionables. Separate analyses were done for casual and formal footwear. The major
attributes used to measure preferences include: footwear should go with the colour of
the dress, standard colours, warranty, durability, price, quality, variety, elegance,
bargaining preferred to fixed price, periodicity of shopping, convenient location,
amenities, ambience of the store and courteousness of salesmen. These attributes were
listed by the respondents. Data regarding income, such as monthly income and
spendable income, was also collected. The study revealed that people belonging to
different lifestyles have different preferences irrespective of the income class they are in.

Key Words: Cluster analysis, Fashionables, Economical, Formal, Casual


Factor analytical approach for site selection of retail outlet: A Case Study

Anita Sukhwal (Pacific Institute of Technology, Udaipur, Rajasthan; shivparadise@gmail.com)
Hamendra Kumar Dangi (Faculty of Management Studies, Delhi; hkdangi@fms.edu)

The growing affluence of India’s consuming class, the emergence of a new breed of
retail entrepreneurs and the flood of imported products in the grocery space have driven
the current retail boom.
Against this backdrop, the purpose of the present paper is to:
• Evaluate the factors affecting consumers' choice of retail outlets
• Review the existing literature
• Seek the opinions of retail customers
• Provide useful suggestions for improvement
• Apply the statistical tools of factor and regression analysis
Research & Methodology
1.1 Area under study: Mumbai, Delhi
1.2 Research design: Exploratory; age range: 20-30; data sources: primary, secondary
1.3 Sampling frame: Convenience; sample units: 120
2.0 Analysis of Results
1. Respondents' profile, market in retail sectors, products
2. Review of literature
3. Analysis of 20 variables
4. Regression analysis
5. Recommendations
3.0 Findings
1. Companies should try to improve their services
2. The location of the outlet should be strategically chosen
3. Consumers should be engaged to curb billing time
4. Loyalty schemes should be emphasized
5. An exclusive brands corner should be displayed
In a nutshell, retail outlets should focus on the complete shopping experience so as to
build a strong customer base.
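A minimal Python sketch of the factor-analysis-plus-regression pipeline outlined above, on synthetic survey data. The 120 respondents match the stated sample size, but the ratings, the target variable and the four-factor choice are assumptions.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(120, 20)).astype(float)  # 20 attribute ratings
overall = ratings[:, :5].mean(axis=1) + rng.normal(0, 0.3, 120)  # toy preference

fa = FactorAnalysis(n_components=4, random_state=1)
scores = fa.fit_transform(ratings)             # respondent factor scores

reg = LinearRegression().fit(scores, overall)  # regress preference on factors
print("R^2 on factor scores:", round(reg.score(scores, overall), 3))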


A Statistical Analysis for Understanding Mobile Phone Usage Pattern among College-Goers in the District of Kachchh, Gujarat

Fareed F Khoja & Surbhi Kangad


SRK Institute of Management and Computer Education, Gandhidham, Kachchh, Gujarat
E-mail: ffkhoja@gmail.com

India is one of the fastest growing telecommunication markets in the world, and it is the
youth who are the real growth drivers of the telecom industry in India. Considering this
fact, the present paper attempts to give a snapshot of how frequently young people use
their mobile phones for the several functions embodied in them. Data was collected from
a sample of 208 mobile phone owners aged between 20 and 29. The study sheds light on
how gender, monthly voucher amount and years of owning a mobile phone influence the
usage pattern of this device; the findings show that there is a significant difference in
the usage pattern of mobile phones across these three variables. The findings would be
helpful for telecom service providers and handset manufacturers in formulating marketing
strategies for different market segments. This paper uses statistical tools for
understanding consumer behavior and formulating marketing strategy.

The paper throws light on the mobile phone consumption pattern among college-goers.
Understanding youngsters as a market segment provides a competitive advantage to
these players. The study reveals how gender, monthly voucher amount and years of
owning a mobile phone influence the usage pattern of this device. The findings would be
helpful for telecom service providers and handset manufacturers in designing a mix of
product and promotion for different market segments. Research undertaken in this area
also helps researchers and scholars understand individual usage patterns of a new medium.

Keywords: Consumption pattern, Statistical Inferences, Statistical Tests (t-test, F-test), Statistical Parameters.
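A minimal sketch of the kind of significance testing the study reports (a two-sample t-test and a one-way ANOVA F-test), on invented usage data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
usage_male = rng.normal(95, 20, 104)     # minutes/day, hypothetical
usage_female = rng.normal(88, 18, 104)

t, p = stats.ttest_ind(usage_male, usage_female)        # two-sample t-test
f, p_anova = stats.f_oneway(usage_male, usage_female)   # one-way ANOVA (F-test)
print(f"t = {t:.2f} (p = {p:.3f}); F = {f:.2f} (p = {p_anova:.3f})")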


Exploring the Factors Affecting the Migration from Traditional Banking Channels to Alternate Banking Channels (Internet Banking, ATM)

Amol G, Aswin T, Basant P, Deepak S & Harish D


Indian Institute of Management, Ahmedabad

In today’s competitive world, everyone wants a greater share of the value derived from
meeting consumer needs. Banks have evolved over time from being just product
providers to being combined service and product providers. One way of deriving
maximum value is by reducing the cost of offering the service, and this has led to the
birth of alternate banking channels. This report looks at some of the factors which
enable or deter the adoption of these alternate channels in the Indian context. A
literature survey was done to get an idea of the various factors which may affect the
adoption of alternate channels; in-depth interviews were then conducted to gain insights
into the attitudinal, motivational and behavioural aspects of adoption. Factor analysis
followed by multivariate regression showed that benefit awareness, easy accessibility,
self-image motivation, ease of instructions, time saving, perception of future substitution
and perception of the human element are very important factors.

Key words: Factor Analysis, Multi-variate Regression, Consumer Needs


Weather Business in India – Potential & Challenges

Pratap Sikdar
Express Advisory Services Private Limited (Express Weather)
Salt Lake City, Kolkata, West Bengal

www.expressgrp.com

In this business case we discuss the potential and challenges of the weather business in
India. It is evident that even though the usefulness of weather services is understood
across the different market segments, the effort to improve the service has not reached
a satisfactory level for want of quality weather data. This is why we have placed strong
emphasis on developing quality weather forecasts.

Developing quality weather data is one part of the total initiative taken by Express; the
other important facet of the initiative lies in the proper packaging and dissemination of
the data and service, so that it can be easily implemented in the existing systems of the
clients.

The Indian market is still at a fledgling stage, and the perception of the target segments
has not matured to the desired extent. It is a demanding task for Express to make the
target segments understand the benefits of using location-specific weather forecast
information in their operational areas and of its application in the client's decision
support system.

We discuss one such successful application of the weather service in the value chain of
an agrochemical company, which has reaped immense benefits in its marketing value
chain.

Keywords: Weather forecasting, Energy weather, Agrochemical, Weather business


Understanding of Happiness among Indian Youth: A Qualitative Approach

Mandeep Dhillon
ICFAI National College, Chandigarh

This qualitative study explored what Indian youth think about happiness. Eight hundred
(800) students wrote free-format essays in response to a simple open-ended question:
"What is happiness?" All the essays were coded using thematic analysis, which revealed
four main themes: (1) happiness is a state of satisfaction, positive feelings and
contentment; (2) happiness is goal achievement and a sense of accomplishment;
(3) social capital (i.e. family and friends) is more instrumental in happiness than financial
capital; (4) happiness comes from spiritual enrichment and freedom from ill-being, i.e.
being healthy. These themes are discussed in the context of Indian philosophical and
spiritual views of happiness.

Keywords: Subjective well being, Life satisfaction, Indian philosophy


Analytical Approach for Credit Assessment of Microfinance Borrowers

Keerthi Kumar and M.Pratima


ICICI Bank

Traditional models of microfinance, which include group lending, offer substantial scope
for one-to-one customer interaction leading to customer credit assessment. However,
with larger banks venturing into the sector, a greater focus on sustainable finance was
sought, and consequently the need for statistical tools for credit assessment was felt. In
this direction, this study has been conducted by ICICI Bank with one of its key MFI
partners.

The objective of the study was to identify borrower characteristics which distinguish
good, bankable customers.

Cross-sectional data on one lakh borrowers was collected by the MFI, and multivariate
analysis led to useful insights. Clients residing in better housing conditions showed a
lower probability of default. Older borrowers (by age) were observed to have higher
credit quality. Moreover, when borrowers' sons were of working age, repayment
performance was superior. Another interesting result was that clients residing near
Primary Health Centers showed better repayment performance.

However, it may be noted that the results hold for the MFI in question and may not be
extended to microfinance lending in general.

Keywords: Micro Finance; Group Lending; Credit Assessment; Loan Repayment
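A minimal sketch in the spirit of the study (not its actual model): a logistic regression of default on a few borrower characteristics, with all data invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1_000
age = rng.integers(20, 60, n)
good_housing = rng.integers(0, 2, n)
near_phc = rng.integers(0, 2, n)                 # near a Primary Health Center
score = -1.0 - 0.03 * (age - 40) - 0.6 * good_housing - 0.4 * near_phc
default = rng.random(n) < 1 / (1 + np.exp(-score))   # simulated default flag

X = np.column_stack([age, good_housing, near_phc])
model = LogisticRegression(max_iter=1000).fit(X, default)
print("coefficients (age, housing, near PHC):", model.coef_.round(3))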


Data Mining & Business Intelligence in Healthcare

Sorabh Sarupria
Product Practice team,
Healthcare and Lifesciences,
Syntel Inc.

Topics Discussed
This paper (poster) discusses data mining and its applications within healthcare in major
areas such as the evaluation of treatment effectiveness, management of healthcare,
customer relationship management, and the detection of fraud and abuse. The paper
(poster) highlights the limitations of data mining and discusses some future directions.

Major conclusions

Treatment effectiveness: Data mining applications can be developed to evaluate the
effectiveness of medical treatments. By comparing and contrasting causes, symptoms,
and courses of treatments, data mining can deliver an analysis of which courses of
action prove effective.

Healthcare management: To aid healthcare management, data mining applications can
be developed to better identify and track chronic disease states and high-risk patients,
design appropriate interventions, and reduce the number of hospital admissions and
claims.

Customer relationship management: As in the case of commercial organizations, data
mining applications can be developed in the healthcare industry to determine the
preferences, usage patterns, and current and future needs of individuals to improve their
level of satisfaction.

Fraud and abuse: Data mining applications that attempt to detect fraud and abuse often
establish norms and then identify unusual or abnormal patterns of claims by physicians,
laboratories, clinics, or others. Among other things, these applications can highlight
inappropriate prescriptions or referrals and fraudulent insurance and medical claims.

Keywords: Data mining methodology and techniques, Data mining applications, Predictive modeling


Business Intelligence in Customer Relationship Management: A Synergy for the Retail Banking Industry

Chiranjibi Dipti Ranjan Panda


ICICI Bank Ltd.
Business Intelligence Unit (BIU)Mumbai

Incorporating customer value strategies into a product-driven business model has
always been a predicament for the retail banking industry, as recent SAS (Statistical
Analytical Software) studies have shown. Even ICICI Bank, the second largest private
sector bank in India & a pioneer in implementing Business Intelligence & Analytics, has
struggled to filter the huge volume of data at hand and interpret it in the light of
modern-day flatter & leaner management, so as to cope with ever-growing competition.
Organizing the data acquired through Business Intelligence (BI) and enhancing its
interpretation & communication therefore become critical for sustaining competitive
advantage in the long run. The success of BI initiatives can only be attained by ensuring
adequate user involvement, sufficient funding, management support & the right choice of
technology. Incremental improvement of existing business models is consequently
necessary &, to that end, three BI models are suggested for realistic implementation:

BI 4A model:-
 Approach: Monitoring, analytical & predictive intelligence.
 Acumen: BI investment alignment with strategic goals.
 Assumption: Willingness to identify & solve business problems.
 Activation: Pre- & post-launch strategy for successful implementation of BI
process.

Cause & effect model (imperatives & implications):

Step 1. Cause: Creation of a customer-focused model. Effect: Establishment of a data structure for a customer single view.
Step 2. Cause: Having a clear image of the customer category. Effect: Implementing analytics to support customer segmentation.
Step 3. Cause: Assessing the lifetime value of the customer. Effect: Analysis & prediction of risk & profitability.
Step 4. Cause: Maintaining the profitability of each customer relationship. Effect: Maximizing cross-sell & up-sell initiatives.
Step 5. Cause: Understanding how to attract & retain the customer. Effect: Preparation of a customer retention model.
Step 6. Cause: Maximizing ROI on marketing campaigns. Effect: Integrated campaign management.

Process & technology flow model:-

1. Customer information
2. Segmentation
3. The Game plan
4. Identifying the high-end customers
5. Tracking
6. Reaching the customer
7. Alternate channels
8. Delivery of the product/services
9. Feedback
10. Product/service innovation

Key words: BI Analytics, BI 4A Structure, The Cause & Effect Model

80
ICADABAI 2009 – Abstracts

‘Competitive Intelligence’ in Pricing Analytics

Chetna Gupta1 and Abhishek Ranjan2


Dell Global Analytics
1Chetna_gupta@dell.com, 2Abhishek_Ranjan@dell.com

Pricing analytics is a core activity within any organization. It becomes even more critical
in a commodity market where customers are extremely price sensitive: the right pricing
brings market share, increased revenue and profitability to a business. ‘Competitive
Intelligence’ (CI) is a critical process by which management assesses the capabilities
and behavior of its current and potential competitors, to assist in maintaining or
developing a “competitive advantage”. Pricing decisions in any company are based on
Competitive Intelligence alongside cost analytics, technology transitions, product
positioning and business strategy. CI provides actionable insights on competing products
for pricing decisions, besides supporting defensive Competitive Intelligence.
There are four stages in monitoring competitors, the four "C"s:
• Collecting the information,
• Converting information into intelligence by Collating it,
• Cataloguing, interpreting and analyzing it through data visualization,
• Communicating the intelligence and countering any adverse competitor actions.
Competitive indices, often triangulated with P&L line items, give valuable insights for
improving profitability. They are used in several industry verticals, such as retail, telecom
and airlines, helping them in demand shaping, in understanding their own product
portfolio and competitors' responses to their pricing actions, and in assessing the
implications of changes in the competitive environment for the different products the
industry offers.

Key Words: Pricing Analytics, Data Visualization, Competitive Intelligence


Retail Analytics and ‘Lifestyle Needs’ Segmentations

Sagar J Kadam1* and Biren Pandya2


Clear Cell Group, Sampatti, Sardar baug Lane, Alkapuri, Vadodara

1 sagar.kadam@clearcellgroup.com
2 biren.pandya@clearcellgroup.com

Customer segmentation is a well-known tool used by retailers globally to understand
their customers. Clear Cell develops segmentations with retailers in order to create a
deep understanding of their customers, and then uses this to improve decision making
focused around their Value Levers.

Value Levers are the areas within a business which are positively impacted by customer
insight: pricing, promotions, store location & layouts, assortment and communication.
The basis of the Lifestyle Needs segmentation is that a customer's lifestyle needs can
be defined by their grocery purchases: effectively, "you are what you eat." Even the
simplest approach to creating a needs segmentation requires sound statistical analysis
along with regular input from business users.

In this paper, we describe the analytical approach to creating a Lifestyle Needs
segmentation. A product association methodology was used to identify trends in the
contents of grocery baskets. Final segments were obtained by using cluster analysis to
group segments with similar lifestyle needs but potentially different products purchased.
The stability and robustness of the final segments were established through rollout over
different periods and by observing how customers move between segments from period
to period.

Keywords: Retail Analytics, Value Levers, ‘Lifestyle Needs’ Segmentation, Cluster analysis, Triangulation process


Revenue/Profit Management in Power Stations by Merit Order Operation

E.Nanda Kishore
NTPC Ramagundam, Employee Development Center

erukullank@yahoo.com

The application of the classical linear regression model in power station operations
management is discussed. The regression (with coal consumption as the independent
variable and power generation as the dependent variable) is checked for structural
change. This experiment helped in deciding generation levels during backing down and
while raising load back to full level.

Results:
1. There is a structural change in the regression at 450 MW; the slope of the equation
is significantly different in the two regions (<450 MW and >450 MW).
2. Hence the coal consumption pattern in one region is significantly different from that
in the other.
Topics Discussed:
 Classical linear regression model.
 Dummy-variable regression model.
 Checking for structural change in regression.
 Other aspects, such as MAPE.
Main Conclusions:
The experiment showed that the unit's behavior changes at 450 MW. This unit's load
should be reduced to 450 MW relatively rapidly during backing down; the load of other
units can be reduced for further load reduction if they show similar behavior. If the
unit's load were reduced to 425 MW and then had to be raised with load demand, it
should be increased immediately up to 450 MW, as this can be done with a smaller
amount of coal.

Key Words: Coal Consumption, Backing down, Structural change, Classical Linear Regression Model.
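A minimal sketch of a dummy-variable check for a structural break, on invented data. For simplicity the sketch regresses coal on generation, so that the 450 MW break sits on the regressor, whereas the paper treats coal consumption as the independent variable; the numbers and units are assumptions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
load = rng.uniform(350, 500, 200)                     # MW, invented
slope = np.where(load > 450, 0.80, 0.65)              # true break at 450 MW
coal = 40 + slope * load + rng.normal(0, 3, 200)      # toy coal consumption

above = (load > 450).astype(float)
X = sm.add_constant(np.column_stack([load, above, above * load]))
res = sm.OLS(coal, X).fit()
# Significant dummy/interaction coefficients indicate a structural change.
print(res.params.round(3), res.pvalues.round(4))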


How to handle Multiple Unsystematic Shocks to a Time Series Forecasting System: an application to Retail Sales Forecasting

Anindo Chakraborty
Target Corporation, India – Bangalore
Anindo.Chakraborty@target.com

Ordinarily, time series methodologies like Box-Jenkins or smoothing methods do a good
job when it comes to forecasting sales for a retailer. However, a retailer's future sales
cannot be predicted from historical sales alone. They are hugely affected by various
critical factors, such as a competitor opening a store, cannibalization from sister stores,
and re-modelling or relocation of stores, to name a few. These "shocks" to the system
are not seasonal, nor do they affect the sales of every store in the same way. Market
research has revealed that competitor density, demographics, store type and maturity
are the most important factors determining how such "shocks" affect retail sales.

The effect of shocks can be estimated using a simple objective segmentation technique
like CART. Using CART, with year-over-year sales change as the dependent variable and
the other factors as independent variables, creates different segments of stores with
varying impacts. These impacts measure the percentage change in year-over-year sales;
some segments show a highly positive impact, others a highly negative one. When these
impacts are applied over the time series forecasts, they can improve (reduce) the MAPE
by 3 to 4 percentage points.

Key words: Retail sales forecasting, Classification & Regression Trees, Multiple
shocks to time series, Segmentation as a tool to reduce MAPE
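A minimal sketch of the CART step described, using scikit-learn's decision tree as the CART implementation; the features and the response surface are invented.

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(5)
n = 500
competitor_density = rng.integers(0, 5, n)
store_age = rng.integers(1, 20, n)                 # maturity in years
yoy_change = (2 - 1.5 * competitor_density + 0.2 * store_age
              + rng.normal(0, 2, n))               # % YoY sales change, toy model

X = np.column_stack([competitor_density, store_age])
tree = DecisionTreeRegressor(max_depth=2).fit(X, yoy_change)
# Each leaf is a store segment with its own average impact on YoY sales.
print(export_text(tree, feature_names=["competitor_density", "store_age"]))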


A Model using scientific method to cut down costs by efficient design of supply chain in Power Sector

U.K.Panda GBRK Prasad A.R Aryasri,


APERC, Hyderabad Dr. Reddy Labs, Hyderabad JNTU,Hyderabad
ukpanda@yahoo.com gbrkprasad@rediffmail.com aryasri@yahoo.com

This paper gives an overview of the Supply Chain Operations Reference (SCOR) model,
the different cost components of tariff, and the models required for tariff formulation,
and identifies potential areas for the energy utility companies (Genco, Transco, Discoms)
to optimize costs in the power sector.

Key Words: Tariff, Power purchase model, Sales Module, Revenue Model, Tariff
Schedule


Clustering as a Business Intelligence Tool

Suresh Veluchamy1, Andrew Cardno2, Ashok K Singh3


1 Einksoft Technologies Private Limited, India
2 BIS2 Solutions, San Diego, USA
3 UNLV, USA

Large store chains deploy sophisticated forecasting and planning systems based
on a clustering or grouping of their stores. Large stores with customer loyalty
cards also use customer transaction data for segmentation of customers in order
to improve their marketing and increase membership into their loyalty programs.
The groupings of stores or customer segments, quite often, are formed by
simplistic methods, with clusters formed by all stores (or customers) in a
geographic location such as a zip code, or by a ranking of stores by total sales.
These clustering approaches typically ignore a large amount of data collected by
the store chains. This data is multivariate in nature, with variables representing,
for example, amounts sold by individual stores for various product categories.

Cluster analysis is a data mining tool that uses the correlation among different
variables in a database to form store clusters or customer clusters. Forecasting
or planning systems that utilize this added information obtained from statistical
clustering will lead to increased potential for growth.

In this paper, we describe a method of clustering stores or customers based upon
transactions data. We illustrate the method using a simulated example.

Keywords: Cluster Analysis, Non-hierarchical, K-means clustering
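A minimal sketch of the kind of simulated illustration described: k-means on multivariate sales-by-category data for stores generated from three latent profiles. All numbers are invented.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# 60 stores x 4 product categories; three latent store profiles
profiles = np.array([[50, 10, 30, 10], [20, 40, 20, 20], [10, 10, 20, 60]])
stores = np.vstack([p + rng.normal(0, 4, (20, 4)) for p in profiles])

km = KMeans(n_clusters=3, n_init=10, random_state=6).fit(stores)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centers (sales by category):")
print(km.cluster_centers_.round(1))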


Validating Service Convenience Scale and Profiling Customers: A Study in the Indian Retail Context

Jayesh P Aagja
Institute of Management,
Nirma University
jayeshaagja@gmail.com

Toby Mammen (Marketing Area, ICFAI Business School, Ahmedabad; toby.mammen@gmail.com)
Amit Saraswat (Marketing Area, ICFAI Business School, Ahmedabad; saraswatamit@gmail.com)

Post liberalization, the Indian economy has seen intense competition due to the entry of
new players in most sectors. But in the current recessionary phase, which set in in 2007,
most service organizations have been trying to hold their market share rather than
increase it. The objective of this study is to validate a service convenience scale in the
Indian organized food & grocery retail context, and to develop a linkage between service
convenience on one side and satisfaction/behavioural intentions on the other. A
convenience sample was drawn from SEC A & SEC B in various parts of Ahmedabad
city, with experience of shopping at organized retail food & grocery outlets. The samples,
drawn in two phases (the first during Jan-March 2008, the second during Feb-March
2009), had 270 and 326 respondents respectively. As in the original scale, five
dimensions emerged through the scale validation process, though with 15 items instead
of the original 17 (Seiders et al., 2007). Neural networks used for nomological model
testing indicated a good model fit. Subsequently, an attempt was made to segment
respondents based on their service convenience scores, which resulted in four groups for
both datasets. Statistically insignificant differences were observed amongst these
clusters based on demographics.

Key words: Service Convenience, Validation, Cluster Analysis, Artificial Neural Networks.


A model for Classification and Prioritization of customer requirements in the value chain of Insurance industry

Shivani Anand1, Sadhan K De2, Saroj Datta3


1, 2 Indian Institute of Technology, Kharagpur
3 Faculty of Management Studies, Mody Institute of Technology and Science
E-mail: 1 shivanianand83@gmail.com, 2 drskde@vgsom.iitkgp.ernet.in, 3 dean.fms@mitsuniversity.ac.in

The purpose of this paper is to study the different enhancements/requirements in the
value chain of the insurance industry as suggested by customers, and to find a model to
classify and prioritize them in order to maximize customer satisfaction.
Three main conclusions were drawn from the study. First, there exists a distinctive trend
in the requirements which can be differentiated and recognized by careful observation of
certain parameters. Second, certain requirements were highly correlated, as they were
found to complement each other in implementation, leading to an exponential increase in
customer satisfaction; on similar lines, we also found pairs of negatively related
requirements, which would diminish each other's implementation impact on customer
satisfaction. Third, we observed that similarly classified requirements produced similar
customer satisfaction. Primarily, the study elucidated four main categories of
requirements: process gaps, efficiency, regulatory and breakthrough requirements.
This work will allow an organization to better understand the demands of its customers
and enable it to satisfy user needs within organizational constraints, by way of a model
that classifies and prioritizes customer-suggested requirements so as to maximize
customer satisfaction.

Keywords: classification and prioritization of requirements, process gaps, K-means clustering, customer satisfaction.


On the Folly of Rewarding Without Measuring: A Case Study on Performance Appraisal of Sales Officers and Sales Managers in a Pharmaceutical Company

Bhavin Shah,
B. K. Majumdar Institute of Business Administration
H. L. College Campus, Navrangpura
Ahmedabad, Gujarat
Email: bhavin.shah@bkmiba.edu.in

Ramendra Singh
IIM Ahmedabad
Ahmedabad, Gujarat
Email: ramendras@iimahd.ernet.in

This case study highlights performance appraisal measurement issues and challenges
for the sales force of a pharmaceutical company, Pharmex (name disguised). We found
that Pharmex was measuring too many (41) qualitative aspects (e.g. competencies,
skills and job knowledge) for Business Officers (BOs) and Area Business Managers
(ABMs), using inconsistent measures. ABMs had a tendency to score the BOs they
supervised higher on qualitative aspects, since such skills and competencies were
difficult to measure accurately compared to the more specific measures based on
numerical quotas. Due to this measurement dichotomy between Part A (qualitative) and
Part B (numerical/quantitative), little correlation was found between the two parts of the
performance appraisal. Multiple regression analysis also suggested that little variance in
performance is explained by the efforts, activities or qualitative aspects of BOs and
ABMs. Such measures, which varied with appraisal aspects, led to inconsistencies and
are therefore likely to be faulty. Like Pharmex, other organizations too may be
measuring the performance of their sales force using unreliable and invalid measures,
leading to erroneous managerial decisions about salesforce performance and the
consequent rewards. Organizations should make efforts to render performance appraisal
measures more scientific and robust.

Key Words: Correlations, Factor Analysis, Multiple Regressions, Performance Appraisal.


The Format or the Store: How Do Buyers Make Their Choice?

Sanjeev Tripathi, P. K. Sinha


Indian Institute of Management, Ahmedabad
Gujarat

E-mail: sanjeev@iimahd.ernet.in

The literature on store choice has mainly studied store attributes and ignored consumer
attributes. Even when consumer attributes have been incorporated, the strength of the
relationship has been weak. The literature has also completely ignored format choice
when studying store choice.

The paper argues for incorporating both shopper attributes and store formats in the
study of store choice. Shopper attributes can be captured through demographic
variables, as these can be objectively measured and also capture a considerable amount
of attitudinal and behavioural variation. The paper proposes to link store choice, format
choice and consumer demographic variables through a hierarchical logistic choice model
in which consumers first choose a store format and then a particular store within that
format.

A nested logit model is developed, and the variables predicting the choice probabilities
are identified. The data requirements for the empirical analysis are specified; the model
has not been verified in the absence of empirical data, but the operationalization of the
variables is done.

Keywords: Format choice, hierarchical choice model, nested logit, shopper attributes,
store attributes, store choice.
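A minimal sketch of how choice probabilities compose in a two-level nested logit of the kind proposed (format first, then store within format); the utilities, format names and nesting parameter are invented.

import numpy as np

# Store utilities grouped by format (hypothetical nests)
utils = {"hypermarket": np.array([1.2, 0.8]),
         "kirana": np.array([0.9, 0.7, 0.4])}
lam = 0.6   # nesting (dissimilarity) parameter, 0 < lam <= 1

# Inclusive value of each nest: IV = log(sum(exp(u / lam)))
iv = {f: np.log(np.exp(u / lam).sum()) for f, u in utils.items()}
# Upper level: P(format) proportional to exp(lam * IV)
top = np.array([np.exp(lam * iv[f]) for f in utils])
p_format = dict(zip(utils, top / top.sum()))

for f, u in utils.items():
    p_store_given_f = np.exp(u / lam) / np.exp(u / lam).sum()
    print(f, "P(format)=%.3f" % p_format[f],
          "P(store|format)=", p_store_given_f.round(3))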


Consumer Involvement for Durable and Non-Durable Products: Key Indicators and Their Impact

Sapna Solanki
Sanghvi Institute of Management & Science
Indore.

E-mail: sapna.solanki594@gmail.com, sapnaa_solanki@rediffmail.com

Involvement refers to how much time, attention, energy and other resources people
devote to purchasing or learning about a product; it is one of the fundamental concepts
used to explain the consumer buying process. The study examines how the level of
consumer involvement for durable and non-durable products is influenced by financial
risk, performance risk, physical risk, social risk, time risk, uncertainty in selection,
psychological risk, previous shopping experiences, product attributes, situation, brand
personality, hedonic value, motivation, level of learning, utility of the product, price,
durability, gift (for whom a product is purchased), lifestyle, store, frequency of use,
additional benefits, packaging and endorsement. A self-designed opinionnaire was framed
to find the various dimensions influencing consumer involvement. For the dependent
variable (consumer involvement), Zaichkowsky's (1985) unidimensional 20-item bipolar
Likert scale, the Personal Involvement Inventory (PII), was adapted. Stepwise regression
was used for both the durable and the non-durable product category, and suggested
four models for each product. The best models indicate that the level of consumer
involvement while purchasing garments is influenced by previous shopping experiences,
hedonic value, special offers and uncertainty, while for laptops the core predictors are
brand personality, hedonic value, frequency of use and durability.

Key Word: Consumer Involvement, Level of Learning, Social Risk and Experience.


Development of Utility Function for Life Insurance Buyers in the Indian Market

Goutam Dutta
Indian Institute of Management
Ahmedabad, Gujarat
E-mail : goutam@iimahd.ernet.in

Sankarshan Basu
Indian Institute of Management
Bangalore

Jose John
Indian Institute of Management
Ahmedabad, Gujarat

Insurance as a financial instrument has been used for a long time. The dramatic
increase in competition within the insurance sector (in terms of providers, coupled with
greater awareness of the need for insurance) has concomitantly resulted in more policy
options being available in the market. The insurance seller needs to know the buyer's
preferences for an insurance product accurately. Framing this as a multi-criterion
decision-making problem, we use a logarithmic goal programming method to develop a
linear utility model. The model is then used to develop a ready reckoner that will aid
investors in comparing policies across various attributes.

Keywords: Goal programming, Multi-criterion decision making, Utility function


A RIDIT Approach to Evaluate the Vendor Perception towards Bidding Process in a Vendor-Vendee Relationship

Sreekumar
Rourkela Institute of Management Studies
Rourkela
E-mail: sreekumar42003@yahoo.com

Ranjit Kumar Das


College of Engineering and Technology
Bhubaneswar

Rama Krishna Padhi


National Productivity Council
Bhubaneswar

S.S. Mahapatra
Department of Mechanical Engineering
National Institute of Technology
Rourkela
E-mail: ssm@nitrkl.ac.in ; mahapatrass2003@yahoo.com

In today’s competitive business environment, where organisations compete on the
effectiveness of their supply chains, a better vendor-vendee relationship can give one
organisation an edge over another. A critical review of the literature shows that many
studies have addressed vendor evaluation and selection, but little attention has been
devoted to vendor perception and satisfaction. This study aims at evaluating vendor
perception of the vendor-vendee relationship. A ridit approach is used to rank the
parameters under one of the dimensions, viz. the bidding process. An organisation can
then focus on the top-ranked parameters to build a better relationship with its vendors.

Key Words: Bidding process, ridit, technical specifications, vendor perception.
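A minimal sketch of a ridit computation: category ridits are derived from a reference distribution, then a mean ridit per bidding-process parameter is used for ranking. The frequencies are invented.

import numpy as np

# Rows: bidding-process parameters; columns: response categories 1..5
freq = np.array([[ 5, 10, 20, 40, 25],
                 [10, 15, 30, 30, 15],
                 [ 2,  8, 15, 45, 30]], dtype=float)

ref = freq.sum(axis=0)                       # reference distribution
p = ref / ref.sum()
# Ridit of category j: P(below j) + half of P(j)
ridit_cat = np.concatenate(([0], np.cumsum(p)[:-1])) + p / 2

mean_ridits = (freq / freq.sum(axis=1, keepdims=True)) @ ridit_cat
for i, r in enumerate(mean_ridits, 1):
    print(f"parameter {i}: mean ridit = {r:.3f}")   # rank parameters by this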


Linear Probabilistic Approach to Fleet Size Optimisation

Rakesh D. Raut1, Ashif J. Tadvi2, Prashant Singh3


NITIE Mumbai
Email: 1rakeshraut09@gmail.com, 2 tadvi.ashif@gmail.com, 3prash83singh@gmail.com

As a well-structured and costly activity that pervades industries in both the public and
private sectors, vehicle fleet management would appear to be an excellent candidate for
model-based planning and optimization. And yet, until recently, the combinatorial
intricacies of vehicle routing and vehicle scheduling have precluded the widespread use
of exact optimization methods for this problem class. The objective here is to minimise
freight cost on a day t subject to mathematical and existing business constraints, which
yields the fleet requirement for that day; we thus obtain fleet requirements for six days.
A turnaround-time analysis is then carried out: the weighted turnaround time of
quantities sent to different destinations comes to about 2.89 days, which is rounded to
three days, meaning that a truck dispatched from the hub on Monday would be available
again on Thursday. Using this analysis, an initial solution is reached which gives the
initial fleet mix.

Next, existing clubbing zones are identified, and the last three months' dispatch data are
searched for cases where clubbing would have been possible but was not captured in
the initial model: the initial model would have selected two smaller vehicles where the
clubbed locations could be served by a single large vehicle. The initial solution is thus
modified to attain the final fleet mix. The results are validated by examining the dispatch
pattern of the last three months against the average breakeven volumes (for all
destinations): the maximum dispatches occur for the type of truck required in the
greatest number, and vice versa.

Key words: Minimising freight; Optimization methods; Vehicle Routing; Vehicle Scheduling
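A minimal sketch of a day-t fleet linear program in the spirit described: cover the dispatch volume at minimum freight cost, with integrality relaxed for simplicity. The costs, capacities and demand are invented.

from scipy.optimize import linprog

cost = [4000, 6500, 9000]        # Rs per trip: small, medium, large truck
capacity = [5, 9, 16]            # tonnes per truck

demand = 70                      # tonnes to dispatch on day t
# minimise cost @ x  subject to  capacity @ x >= demand, x >= 0
res = linprog(c=cost,
              A_ub=[[-c for c in capacity]], b_ub=[-demand],
              bounds=[(0, None)] * 3)
print("trucks per type:", [round(x, 2) for x in res.x],
      "freight cost:", round(res.fun))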


Optimisation of Manufacturing Lead Time in an Engine Valve Manufacturing Company Using ECRS Technique

Manikandan T.1, Senthil Kumaran S.2


College of Engineering Guindy, Anna University Chennai, Chennai, Tamilnadu.
E-mail: 1tmani4884@yahoo.com, 2 metrosenk@yahoo.com

There has been a constant need for an efficient and robust scheduling technique to
reduce lead time and increase productivity. This paper focuses on reducing the
manufacturing lead time in the production layout of an engine valve manufacturing
company. The layout experiences problems such as high inventory, high setup times and
excessive part travel. DMAIC, a Six Sigma tool, is used to approach the problem. Details
regarding product mix, volume of production, Work-In-Progress (WIP) and sequence of
operations were collected, and a simple heuristic technique was proposed to decrease
the manufacturing lead time. The approach was validated using WITNESS, a simulation
software package; the simulation showed a considerable reduction in WIP and thereby
resulted in monetary benefits. When the proposed heuristic technique was implemented,
the situation demanded a lean tool: Eliminate, Combine, Rearrange, Simplify (ECRS). In
total, the setup time for a particular process step was reduced from the existing 20
hours per month to 6 hours per month.

Keywords: Batch size, Six Sigma tool, WITNESS software, Inventory reduction.


Efficient Decisions Using Credit Scoring Models

Jayaram Holla
Shrikant Kolhar
Srinivas Prakhya

Indian Institute of Management, Bangalore

Information-based strategies have proved to be extremely successful in the credit
industry. The first step in deploying such a strategy is the development of a robust
credit-scoring model. Scoring approaches provide two primary advantages. The first is
improved profitability through the predictive power of the model. The second lies in the
highly consistent, objective and efficient manner in which credit decisions are made.
Modeling is usually used to accept or reject applications, not in the post-disbursement
phase; yet credit-scoring models can also be used post-disbursement to enhance
efficiency in the allocation of collection resources. The literature has not paid much
attention to this aspect. How can posterior beliefs about consumers' propensity to
default be used to develop a collection strategy?

In this paper, an ordered model that classifies applicants into good, marginal and bad
categories is developed. A model of repayment behavior is proposed in which variation in
willingness and ability to repay is explained by individual-specific factors and
heterogeneity. Category probabilities are derived assuming that the categories are
ordinal. The model explains more variation than the standard two-category models used
in the industry, and its estimates are robust, as evidenced by results when deployed on
a validation sample. The model has the additional benefit of being useful in allocating
collection resources post-disbursement. It can also be used in conjunction with other
information to design cross-selling programs, and the final model could be the core for
assessing value-at-risk and moving to risk-based pricing. Information-based strategies
are knowledge intensive: firms deploying them are in a learning-by-doing mode, resulting
in the accumulation of tacit knowledge that is an inimitable resource, and the
development of inimitable resources is a key factor in obtaining sustainable competitive
advantage.

Keywords: Credit Decisions, cross-selling, risk-based pricing.


Improving Predictive Power of Binary Response Model Using Multi-Step Logistic Approach

Sandeep Das
Analytics, Genpact India
Rajarhat, Kolkata

E-mail: sandeep.das1@genpact.com

This paper discusses a methodology called "Multi-Step Logistic Regression" for improving
the predictive power of the binary logistic regression model in terms of a higher Hit/Miss
ratio. A 'Hit' is defined as a right classification/tagging and a 'Miss' as a wrong
classification, obtained from the cross-tabulation of actual vs. predicted tagging. In this
approach, after choosing the final-cut logistic model, the model-building population is
segregated into two parts, predicted 1 and predicted 0, by selecting a cutoff on the
predicted probability distribution. For the predicted-1 group, the parameter estimates are
re-estimated, keeping the variables that came out significant in the initial model. The
user may choose to introduce new variables in each iteration and keep them in the
model according to their significance. These steps are repeated iteratively until there is
a good cost-benefit reason to stop. The conventional (single-step) logistic method does
not handle situations where the proportions of 1s and 0s are distinctly different or the
cost of misallocation is high; to tackle such situations, we discuss this alternative
approach. The paper aims to improve the concentration in the Hit cells with a tolerable
(rather than alarming) increase in the concentration of misclassification compared with
the single-step approach.

KEY WORDS: Multistep Logistic, Binary Response Model, Improving predictive power of
Probability of Default (PD) model.
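A minimal sketch of the multi-step idea: fit a logistic model, split cases at a probability cutoff, then re-estimate on the predicted-1 group. The data are synthetic, the cutoff is an assumption, and the paper's variable re-selection step is omitted.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=6, weights=[0.85],
                           random_state=8)          # imbalanced 1s vs 0s

step1 = LogisticRegression(max_iter=1000).fit(X, y)
p = step1.predict_proba(X)[:, 1]
cutoff = 0.3                                        # assumed cutoff
pred1 = p >= cutoff                                 # predicted-1 segment

# Step 2: re-estimate on the predicted-1 group only
step2 = LogisticRegression(max_iter=1000).fit(X[pred1], y[pred1])
hits1 = (pred1 == y).mean()
hits2 = (step2.predict(X[pred1]) == y[pred1]).mean()
print(f"step-1 hit rate: {hits1:.3f}; step-2 hit rate within segment: {hits2:.3f}")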


Net Opinion in a box

Nimisha Gupta*1, Vamsi Veeramachaneni*, O.M.V. Sucharitha*, Ramesh Hariharan*, V. Ravichandar+, Saroj Sridhar+, T. Balaji+

* Strand Life Sciences, Bangalore
+ Feedback Consulting, Bangalore

E-mail: nimisha@strandls.com

Market research plays an integral role in the product development lifecycle. One of the
goals of market research is to understand the likes and dislikes of customers and identify
the features that need to be added or enhanced. Traditional market research involving
survey design, focus groups and quantitative research can be very expensive.

In this paper, we describe Net Opinion in a Box (NOB), an opinion mining platform that
can aid the market research process. Using Natural Language Processing (NLP)
technologies, NOB extracts opinions expressed on Web 2.0 platforms like blogs, product
forums, and social networking sites. An optional curation module can be used to
manually improve the precision of the NLP results. The opinions about specific features
of products are stored along with relevant meta-data like publication date, author
location, product brand, model etc.

These sentiments are presented to the end user in an intuitive web-based visualization
dashboard. The dashboard allows users to apply filtering criteria to examine all aspects
of a product, perform a side-by-side comparative analysis of different brands, and study
how the opinions about a brand change with time. The interface allows users to drill
down to the actual sentences and provides links to the source site.

The entire pipeline from product definition to publishing can be configured and monitored
via simple web-based user interfaces. The platform is currently being used to power
opinion research for 17 products at http://www.feedbackstrands.com/.

Keywords: Sentiment analysis, Opinion extraction, Natural Language Processing, Dashboard Visualization, Manual curation, Data Retrieval


Using Investigative Analytics & Market-Mix Models for Business Rule & Strategy Formulation: A CPG Case Study
Mitul Shah*,
Infosys Consulting, Bangalore.
E-mail: Mitul_shah@infosys.com
* Corresponding author

Jayalakshmi Subramanian,
Infosys Consulting, Bangalore.
E-mail: Jaya_s@infosys.com

Suyashi Shrivastava,
Infosys Technologies Limited, Bangalore
E-mail: Suyashi_S@infosys.com

Kunal Krishnan,
Infosys Technologies Limited,
Bangalore .
E-mail: Kunal_Krishnan@infosys.com

FMCG (or CPG) companies spend anywhere between 12% and 25% of their revenue on
various marketing activities, which drive 40-60% of sales volume. Annual budgeting in
this industry is of paramount importance and sets the tone of marketing initiatives for
the rest of the year. A plethora of metrics and models, enabled by powerful IT tools, is
deployed to understand the effectiveness of campaigns and promotions. Yet, more often
than not, category managers are left with open questions which are not answered by
any metrics or regular market-mix models. CPG companies are increasingly looking to
investigative analytics to understand market drivers and how they are influenced by
various promotional activities; this informs the annual budgeting activity and the effective
allocation of trade funds. The paper discusses how to use investigative analytics to aid
such strategic decisions, with the help of a case study.

The study highlights a process which aims to answer a simple question: to what extent
do market drivers influence the sales of the two top brands, and where should the
organization focus its marketing spend for each brand in the next one year? The study
used live data from a leading consumer packaged goods (CPG) company and also aimed
at preparing a hierarchy of the key factors that influence the sales of each brand. The
study was aided by the statistical tools SAS 9.2 and SPSS 17.0 and by the iCAT platform
developed by the Infosys Product Incubation & Engineering Group.

Key Words: CPG Industry, Strategy Formulation, Understanding Market Drivers, Marketing Spend Allocation, Analytics & Market Mix Models


Improve Dispatch Capacity of Central Pharmacy

Ruhi Khanna, Atik Gupta, Devarati Majumdar, Shubhra Verma


Max Healthcare Institute Ltd
New Delhi

Email: ruhi.khanna@maxhealthcare.com

To increase new patient enrollments and retain existing patients, healthcare
organizations are becoming increasingly patient-centric, as word-of-mouth publicity
affects the market share of a healthcare set-up more than anything else. It has thus
become equally important to optimize performance with respect to quality. One means
of assessing patient expectations is feedback forms, through which the Voice of
Customer revealed high dissatisfaction amongst patients with the Max chemist
experience.

The CTQ drill-down suggested that the low dispatch capacity of the Central Pharmacy
was directly impacting customer satisfaction at the different pharmacies. With the help
of Lean Six Sigma, we achieved an increase in the dispatch capacity of the Central
Pharmacy to the satellite pharmacies of 37%, against a target of 32%. Post-project
analysis of patient feedback revealed an immediate 12% increase in customer
satisfaction and fewer negative VOCs.

Keywords: CTQ, Healthcare, LEAN Six Sigma


Application of Neural Networks in Statistical Control Charts for Process Quality Control

Chetan Mahajan
Business Analytics & Research, Fidelity Investments, Bangalore, Karnataka
Email: cvmahajan[at]gmail.com, cvmahajan[at]yahoo.com

Prakash G. Awate
Mechanical Engg. Department and Industrial Engg. & Operations Research Group, IIT Bombay, Mumbai
Email: awatepg[at]iitb.ac.in

The use of neural networks for pattern classification/recognition in statistical quality
control charts is considered in the context of computerized systems with automated
inspection and on-line decision-making capabilities in real time.

The objective of the study is to obtain the best network configurations for detecting the
different out-of-control patterns present in x-bar control charts, after investigating in
detail several aspects of and issues concerning the use of multi-layer feed-forward
networks.

Our experiments indicated that, to prevent the neural networks from mistaking one type
of pattern for another, it was crucially important to employ two hidden layers in the
feed-forward network. Further, it was found that the sudden shift (upward or downward)
pattern is relatively more difficult for the neural networks to learn.

A multi-layer feed-forward network trained with the standard back-propagation algorithm
was employed to detect the non-random patterns in the x-bar control chart. The
structures as well as the neurons' parameters were obtained through extensive
simulations of learning and testing.

Keywords: Pattern Recognition, Multi-layer Feed Forward Network, Quality Management, Statistical Quality Control, Artificial Intelligence
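A minimal sketch of the classification task described, with a two-hidden-layer feed-forward network (per the finding above) trained on generated chart windows; the pattern generators, window length and layer sizes are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(9)
W = 20                                   # window of subgroup means

def window(kind):
    base = rng.normal(0, 1, W)           # natural (in-control) pattern
    if kind == 1:                        # linear trend
        base += np.linspace(0, 2.5, W)
    elif kind == 2:                      # sudden shift mid-window
        base[W // 2:] += 2.0
    return base

labels = rng.integers(0, 3, 3000)
X = np.array([window(k) for k in labels])

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                    random_state=9).fit(X, labels)
print("training accuracy:", round(clf.score(X, labels), 3))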


Measurement of Risk and IPO Underprice

Seshadev Sahoo1 & Prabina Rajib 2

1 Vinod Gupta School of Management (VGSOM), IIT Kharagpur, and Institute of Management & Information Science, Bhubaneswar, Orissa.
Email: seshadev@vgsom.iitkgp.ernet.in, seshadev@rediffmail.com

2 Corresponding author. Vinod Gupta School of Management, Indian Institute of Technology, Kharagpur, West Bengal.
Email: prabina@vgsom.iitkgp.ernet.in, prabina.iitkgp@gmail.com

Empirical studies of the IPO underpricing anomaly in recent years have closely examined
various proxies for risk, none of which seems to explain a significant portion of
underpricing. This paper seeks to shed light on the controversy using a sample of 92
IPOs issued in India during 2002-2006. We examine the suitability of the high price
deflated by the low price (H/L) as a risk surrogate to explain underpricing. The sample
provides some evidence that H/L is a better proxy for ex-ante risk than other risk
surrogates; the H/L ratio, estimated as the average ratio of high to low price over the
initial one month of trading, has superior predictive ability for underpricing. Besides H/L,
other risk proxies that prove statistically significant include investment bank prestige
and the inverse of offer proceeds. Further, we studied variation in the predictive behavior
of the risk proxies across the manufacturing and non-manufacturing sectors. We found
no significant difference in the average H/L value for manufacturing and
non-manufacturing firms. We also document that aftermarket H/L, investment bank
prestige, and the age of the issuing firm are suitable risk proxies for manufacturing-sector
IPOs, while risk for non-manufacturing-sector IPOs is better represented by H/L,
investment bank prestige, the inverse of offer proceeds, and aftermarket price volatility.

Key Words: High Price to Low Price, Risk Proxy, Investment Bank Prestige, Initial Day
Return.


Efficiency of Microfinance Institutions in India

Debdatta Pal
Indian Institute of Management, Ahmedabad

Email: debdatta@iimahd.ernet.in

In this study, a data envelopment analysis approach to efficiency has been applied to a
sample of thirty-six Indian Microfinance Institutions (MFIs), taking each institution as
a Decision Making Unit. The analysis takes the portfolio outstanding as on March
31, 2008 as the output variable, while on the input side the number of personnel in
the organization and the cost per borrower are taken as proxies for labour and
expenditure respectively. The MFIs that remain efficient under both the constant returns
to scale and the variable returns to scale assumptions are Sanghamithra Rural Financial
Services, Spandana Sphoorty Financial Limited and Pusthikar. The study also
attempts to identify and analyze the possible determinants of the efficiency of MFIs in
India; the candidate variables are grouped under four broad categories, namely location,
governance, presence & outreach, and financial management & performance. The
results indicate that the value of total assets, the level of operational self-sufficiency,
return on assets, return on equity, age, and borrowers per staff member of an MFI are
positively correlated with all efficiency measures, while portfolio at risk (PAR 30 days) is
positively correlated only with Technical Efficiency (TE) and Pure Technical
Efficiency (PTE). As expected, the debt-equity ratio is negatively related to TE and
PTE. Regarding location, only the MFIs from southern Indian states have a positive
correlation with all three measures of efficiency.
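
As a hedged illustration of the underlying computation, the sketch below solves the input-oriented CCR (constant returns to scale) envelopment LP for each DMU with scipy; the two-input, one-output layout mirrors the study's setup, but all numbers are invented.

```python
# Minimal input-oriented CCR DEA sketch with scipy.optimize.linprog.
# Inputs: staff count, cost per borrower; output: portfolio outstanding.
# The data are illustrative, not the study's sample.
import numpy as np
from scipy.optimize import linprog

X = np.array([[50, 120], [40, 100], [70, 150], [60, 90]], float)  # inputs
Y = np.array([[500], [450], [520], [480]], float)                 # output

def ccr_efficiency(j, X, Y):
    n, m = X.shape            # n DMUs, m inputs
    s = Y.shape[1]            # outputs
    # variables: [theta, lambda_1..lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_k lambda_k * x_ki - theta * x_ji <= 0
    A_in = np.c_[-X[j], X.T]
    # outputs: -sum_k lambda_k * y_kr <= -y_jr
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[j]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun            # theta = technical efficiency score

for j in range(len(X)):
    print(f"DMU {j}: TE = {ccr_efficiency(j, X, Y):.3f}")
```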

Key words: Data Envelopment Analysis, Rural Financial Services, Pure Technical
Efficiency


Measuring Efficiency of Indian Rural Banks Using Data Envelopment Analysis

Gunjan M. Sanjeev

Indus World School of Business (IWSB),

Greater Noida, Uttar Pradesh

E-mail: gunjmit@hotmail.com

A plethora of literature is available on the measurement of the efficiency of financial
institutions using various parametric and non-parametric methods. The Western world
has witnessed many studies in this area, and emerging countries, including India, have
made an encouraging contribution in the recent past. Though in the Indian context many
studies have been carried out to evaluate the efficiency of public sector, private sector
and foreign banks, no significant focus has been given to the Regional Rural Banks.

This study makes an exploratory attempt to measure the efficiency of 96 Regional
Rural Banks (RRBs) using a mathematical programming approach, Data Envelopment
Analysis (DEA). It is found that seven RRBs emerge fully efficient out of the 96
studied. The mean efficiency score is 0.764. A few banks need immediate attention as
their efficiency scores are very low. A preliminary effort has been made to see (i) whether
there is any link between the efficiency of an RRB and its association with its sponsor
bank; and (ii) whether the efficiency of an RRB has any link with its geographical location.
It is found that a few sponsor banks emerge winners: all RRBs operating under them are
efficient. Also, there are a few states in India where all the RRBs are efficient.

Keywords: Non parametric method, technical efficiency, Indian banks


Ranking R&D institutions: A DEA study in the Indian context

Santanu Roy
Institute of Management Technology
Ghaziabad

Email: sroy@imt.edu, rsan58@yahoo.co.uk

One major problem in evaluating the efficiencies of public institutions as pointed out
by many researchers is the lack of a good estimate of the production function. The
study reported in the paper adopts the methodology of data envelopment analysis
(DEA) and measures the relative efficiencies of public-funded research and
development laboratories in India (each laboratory being considered as a decision
making unit) with data drawn from 12 such laboratories functioning under the Council
of Scientific and Industrial Research (CSIR). The laboratories considered are spread
over different regions of the country and work in diverse fields of science,
engineering and technology. The input data for the study consist of the total number
of scientific personnel and the total number of technical personnel working in each
laboratory, and the output data consist of the number of papers published in Indian
journals, the number of papers published in foreign journals, and the number of patents
filed by these laboratories. Both the global efficiency scores and the different local
efficiency scores (with specific inputs and outputs) were evaluated and potential
improvements were ascertained. The implications of the study results have been
analyzed and discussed.

Keywords: Data envelopment analysis, efficiency scores, potential improvement, public institutions, relative efficiency, research and development laboratory.


A New Filtering Approach to Credit risk

Vivek S. Borkar
School of Technology & Computer Science,
TATA Institute of Fundamental Research,
Mumbai
Email: borkar@tifr.res.in

Mrinal K. Ghosh
Department of Mathematics,
Indian Institute of Science,
Bangalore
Email: mkg@math.iisc.ernet.in

Govindan Rangarajan
Department of Mathematics,
Indian Institute of Science,
Bangalore.
Email: rangaraj@math.iisc.ernet.in

The celebrated Merton model views the equity of a firm as a European call option on the
assets of the firm. This allows one to treat the assets as a partially observed process,
observed through the `observation' process of equity. This is the standard framework for
nonlinear filtering, which in particular allows us to write an explicit expression for the
likelihood ratio for the underlying parameters in terms of the nonlinear filter. As the
evolution of the filter itself depends on the parameters in question, this does not permit
direct maximum likelihood estimation, but it does pave the way for the
`Expectation-Maximization' (EM) method for estimating the parameters.
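
A hedged sketch of the observation equation in this setup: equity priced as a European call on the (latent) assets via the Black-Scholes formula; all numbers are illustrative assumptions, not the authors' calibration.

```python
# Sketch of the Merton view: equity E is a call on firm assets V with the
# face value of debt D as strike. Equity is observed; V and its volatility
# are latent, which is what makes this a filtering problem.
from math import exp, log, sqrt
from scipy.stats import norm

def merton_equity(V, D, r, sigma, T):
    """Equity value = Black-Scholes call on assets V, strike D, maturity T."""
    d1 = (log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * norm.cdf(d1) - D * exp(-r * T) * norm.cdf(d2)

print(merton_equity(V=120.0, D=100.0, r=0.05, sigma=0.25, T=1.0))
```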

Key words: Merton's model, assets, equity, nonlinear filter, EM algorithm


Volatility of Eurodollar futures and Gaussian HJM term structure models

Vladimir Pozdnyakov1 and Balaji Raman2


Department of Statistics, University of Connecticut, CT

Email: 1 valadimir.pozdnyakov@uconn.edu, 2 balaji.raman@uconn.edu.

One of the standard tools for the theoretical analysis of fixed income securities
and their associated derivatives is the term structure model of Heath, Jarrow and
Morton. In this paper we suggest a simple criterion, based on realized volatility, that tells
which Gaussian HJM model is consistent with observed Eurodollar futures. We also
address the estimation of the parameters of these models by two different
methods: the method of realized volatility and the method of maximum likelihood.

Keywords: Gaussian HJM; Eurodollar futures; Realized volatility; Maximum likelihood


Wavelet Based Volatility Clustering Estimation of Foreign Exchange Rates

A.N.Sekar Iyengar,
Saha Institute of Nuclear Physics,
1/AF Bidhan Nagar, Kolkata

Email: ansekar.iyengar@saha.ac.in

We have presented a novel technique for detecting intermittencies in a financial time
series, the foreign exchange rate data of the US dollar-Euro (USD/EUR), using a
combination of both statistical and spectral techniques. This has been possible due to
Continuous Wavelet Transform (CWT) analysis, which has been widely applied to
fluctuating data in various fields of science and engineering and is also being tried out in
finance and economics. We have also been able to qualitatively identify the presence of
nonlinearity and chaos in the time series.
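
As a hedged sketch of the wavelet step, the snippet below runs a continuous wavelet transform (Morlet wavelet, via PyWavelets) over a synthetic return series with an injected burst; the series and all settings are our illustrative assumptions, not the USD/EUR data.

```python
# Sketch: CWT of log-return-like data to surface intermittent bursts.
import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 1024
returns = rng.normal(0, 0.001, n)
returns[400:430] += rng.normal(0, 0.01, 30)   # injected burst (intermittency)

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(returns, scales, "morl")

# Large |coeffs| concentrated in time and scale flag intermittent events.
power = np.abs(coeffs) ** 2
t_burst = power.sum(axis=0).argmax()
print("strongest localized activity near sample index:", t_burst)
```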

Key words: Time-Scale analysis, Intermittency, Nonlinearity and Chaos


Modelling Multivariate GARCH Models with R: The ccgarch Package

Tomoaki Nakatani
Department of Agricultural Economics
Hokkaido University
Sapporo, Japan
and
Department of Economic Statistics
Stockholm School of Economics
Stockholm, Sweden

E-mail: naktom2@gmail.com

A preliminary version. Please do not cite without permission from the author.

This paper contains a brief introduction to the package ccgarch, developed for use
in the open source statistical environment R. ccgarch can estimate certain types of
multivariate GARCH models with explicit modelling of conditional correlations (the CC-
GARCH models). The package is also capable of simulating data from the major types of
CC-GARCH models with multivariate normal or Student's t innovations. Small Monte
Carlo simulations are conducted to see how the choice of initial values affects the
parameter estimates. The usefulness of the package is illustrated by fitting
a trivariate Dynamic Conditional Correlation GARCH model to stock returns data.
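
ccgarch itself is an R package; purely as an illustration of the data-generating process it models, here is a minimal Python simulation of a bivariate constant-conditional-correlation GARCH(1,1). All parameters are invented, and this is a sketch of the model class, not the package's API.

```python
# Simulate a bivariate CCC-GARCH(1,1): each series has its own GARCH
# variance, innovations share a constant correlation R.
import numpy as np

rng = np.random.default_rng(3)
T = 1000
omega = np.array([0.01, 0.02])
alpha = np.array([0.05, 0.08])
beta  = np.array([0.90, 0.85])
R = np.array([[1.0, 0.4], [0.4, 1.0]])   # constant conditional correlation
L = np.linalg.cholesky(R)

h = omega / (1 - alpha - beta)           # start at unconditional variance
y = np.zeros((T, 2))
for t in range(T):
    z = L @ rng.normal(size=2)           # correlated standard innovations
    y[t] = np.sqrt(h) * z
    h = omega + alpha * y[t] ** 2 + beta * h

print("sample correlation:", np.corrcoef(y.T)[0, 1])
```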

Keywords: Dynamic conditional correlations, multivariate generalised autoregressive conditional heteroskedasticity, financial econometrics


Wind energy: Models and Inference

Abhinanda Sarkar
GE John F Welch Technology Center
EPIP, Whitefield
Bangalore
Email: Abhinanda.Sarkar@ge.com

Wind turbines are sources of electrical power, converting a random source, the wind,
into electricity at a steady frequency. The wind velocity can be considered a time series
with a marginal distribution that permits extreme winds. The wind's mechanical energy
relates to electrical energy via a power curve that also depends on other turbine
characteristics. The modeling and estimation challenge is thus a non-normal time series,
together with implications for aspects such as turbulence parameters and the effects of
averaging. The uncertainty in the wind can then be converted into risk measures for the
power generated.
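
A hedged sketch of the first modeling step: fit a Weibull marginal to wind speeds and push the sample through a toy power curve to get expected power. The turbine constants and the data are illustrative assumptions, not actual figures.

```python
# Fit a Weibull marginal to wind speeds and estimate expected power.
import numpy as np
from scipy.stats import weibull_min

wind = weibull_min.rvs(2.0, scale=8.0, size=5000, random_state=5)  # "observed"

shape, loc, scale = weibull_min.fit(wind, floc=0)  # fix location at zero
print(f"fitted Weibull: k={shape:.2f}, c={scale:.2f} m/s")

def power_curve(v, cut_in=3.0, rated=12.0, cut_out=25.0, p_rated=2000.0):
    """Toy power curve (kW): cubic ramp between cut-in and rated speed."""
    v = np.asarray(v, float)
    return np.where(v < cut_in, 0.0,
           np.where(v < rated, p_rated * ((v - cut_in) / (rated - cut_in)) ** 3,
           np.where(v < cut_out, p_rated, 0.0)))

print("expected power (kW):", power_curve(wind).mean())
```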

Key words: Weibull distribution, autoregressive time series, power curve, turbulence
intensity, value at risk


Field Data Analysis - A Driver for Business Intelligence and Proactive Customer Oriented Approach

Prakash Subramonian, Sandeep Baliga & Amarnath Subrahmanya


Reliability Engineering Department, Honeywell Technology Solutions, Bangalore

Today, industries across the world are driven to be innovative, globalized and cost
effective due to a vigilant globalized consumer (customer). Consumers are looking for
innovative, reliable and safe products with extended warranties, sales and service, and
clauses for liabilities and penalties for product non-function. There is also the cost factor
that the consumer looks for in addition to the above stated needs. In order to cater to the
consumer’s needs, industries must be innovative and cost effective, not only in terms of
their product design but also in proactively addressing product reliability and reducing
warranty claims. This paper deals with product performance, life modeling and
simulation to support business decisions. The technique of data collection and
computation through a concept called Early Indicators Product Tracking, which
dynamically flags changes in product performance, is dealt with in detail. The use of
Weibull analysis for product life modeling and risk forecasting is explained. The
advantage of time-dependent modeling over the traditional “take-away” constant failure
rate model is discussed. The concept of business simulation to help plan operations and
anticipate bottlenecks, along with its benefits, is also addressed. These techniques,
applied in a systematic way, can help manage the business by supporting the right
decisions and by proactively addressing customer issues.

Key words: Reliability, risk forecasting, Weibull, Early indicator product tracking,
Simulation, Mean Time between Failures (MTBF).


Simple Algorithms for Peak Detection in Time-Series

Girish Keshav Palshikar


Tata Research Development and Design Centre (TRDDC)
Pune

Email: gk.palshikar@tcs.com

Identifying and analyzing peaks (or spikes) in a given time-series is important in many
applications. Peaks indicate significant events such as sudden increase in price/volume,
sharp rise in demand, bursts in data traffic etc. While it is easy to visually identify peaks
in a small univariate time-series, there is a need to formalize the notion of a peak to
avoid subjectivity and to devise algorithms to automatically detect peaks in any given
time-series. The latter is important in applications such as data center monitoring where
thousands of large time-series indicating CPU/memory utilization need to be analyzed in
real-time. A data point in a time-series is a local peak if (a) it is a large and locally
maximum value within a window, which is not necessarily large nor globally maximum in
the entire time-series; and (b) it is isolated, i.e., not too many points in the window have
similar values. Not all local peaks are true peaks; a local peak is a true peak if it is a
reasonably large value even in the global context. We offer different formalizations of the
notion of a peak and propose corresponding algorithms to detect peaks in the given
time-series. We experimentally compare the effectiveness of these algorithms.
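
As a hedged illustration of one such formalization, in the spirit of the definition above rather than the authors' exact functions, the sketch below scores each point by its average prominence over k neighbours on each side and then keeps only globally large scores.

```python
# Simple peak-detection sketch: local prominence + global thresholding.
# Window size k and threshold multiplier h are illustrative choices.
import numpy as np

def detect_peaks(x, k=3, h=1.5):
    x = np.asarray(x, float)
    n = len(x)
    scores = np.zeros(n)
    for i in range(k, n - k):
        left = x[i] - x[i - k:i]            # signed distances to left window
        right = x[i] - x[i + 1:i + k + 1]   # signed distances to right window
        scores[i] = (left.max() + right.max()) / 2.0  # local prominence
    # a local peak is a true peak only if its score is large globally
    m, s = scores.mean(), scores.std()
    return np.where(scores > m + h * s)[0]

t = np.linspace(0, 10, 500)
x = np.sin(t) + np.random.default_rng(6).normal(0, 0.05, 500)
x[100] += 2.0                               # injected spike
print(detect_peaks(x))
```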

Keywords: Time-series, Peak detection, Burst detection, Spike detection


Using the Decision Tree approach for Segmentation analysis – an analytical overview

Rudra Sarkar
Genpact Analytics, Genpact India
Rajarhat, Kolkata

E-mail: rudra.sarkar@genpact.com

Very frequently we want to find ‘proper’ segments within our customer base to meet
various business challenges. These segments can be aligned to various business
verticals such as marketing, risk or collections. In this paper we review one such method
of doing segmentation analysis, which is relevant for the analytics support of the
business. The decision tree helps us perform Chi-Square Automatic Interaction Detector
(CHAID) segmentation analysis and eventually drill down to the target segments. The
tree can be produced using different software applications; one of the most popular
amongst them is the Knowledge Seeker Studio application from Angoss.

We also discuss some basic concepts of segmentation with relevant examples from
business, and then describe how we typically deal with such a segmentation analysis
using a decision tree and eventually how we come up with recommendations for the
business. Through proper segmentation and right targeting, a business can add a lot to
the bottom line. Moreover, segmentation analysis does not demand a huge investment:
gathering data points, doing the segmentation and coming up with specific business
implementations are the actionable steps.
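
To make the splitting criterion concrete, the hedged sketch below runs the chi-square independence test that a CHAID-style tree uses to judge a candidate split; the segment labels, the counts and the use of scipy are our illustrative assumptions, not Knowledge Seeker's internals.

```python
# Does a candidate predictor significantly separate "bad rate"?
import numpy as np
from scipy.stats import chi2_contingency

# contingency table: rows = candidate segments, cols = (good, bad) accounts
table = np.array([
    [900, 100],   # segment A: 10% bad rate
    [850, 150],   # segment B: 15% bad rate
    [700, 300],   # segment C: 30% bad rate
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p-value={p:.2e}")
# CHAID would accept this split if p falls below the (Bonferroni-adjusted)
# threshold, then recurse on each resulting segment.
```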

Keywords: Segmentation, CHAID, Decision Tree, Bad Rate as dependent variable


Novel Business Application - Business Analytics

Sanjay Bhargava
Bharat Petroleum Corporation Limited

Crude oil procurement is one of the most critical activities in an oil refinery; crude oil
constitutes 95% of the total refining cost. After procurement, the refining of the crude oil
and the product slate obtained thereby need to be planned.

There is a conventional method of crude oil evaluation and of arriving at a product slate,
which the oil industry used when refinery complexity was low, refineries processed a
single type of crude oil, and product quality requirements were not stringent. With the
passage of time, however, crude oil evaluation and product slate preparation have posed
greater challenges: refinery complexity has increased to improve the value addition from
processing crude oils, and product specifications have become more environment
friendly to meet pollution control norms.

The presentation deals with the transformation from simple yield-based calculations to a
Linear Programming (LP) based model for refinery crude procurement and production
planning. In the LP model, in addition to the yields of the crude oils and of the refinery's
various processing units, product demand, processing unit capacities, crude oil and
product prices, make-or-buy decisions, etc. are considered. The output, in addition to
crude processing and the product slate and quality, also indicates the marginal value of
each crude oil and product. Scenario analyses, such as production of various products
or use of external streams, can be carried out.
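
A toy, hedged version of such an LP, with invented yields, margins and limits (a real PIMS model is vastly larger and multi-period):

```python
# Maximize margin over a two-crude slate subject to capacity and demand.
import numpy as np
from scipy.optimize import linprog

margin = np.array([4.0, 3.0])          # net margin per barrel of crude A, B
c = -margin                            # linprog minimizes

# yields per barrel: rows = (gasoline, diesel), cols = (crude A, crude B)
yields = np.array([[0.5, 0.3],
                   [0.3, 0.5]])

A_ub = np.vstack([
    [1, 1],          # distillation capacity: total crude <= 100
    yields[0],       # gasoline <= max demand 45
    yields[1],       # diesel   <= max demand 40
    -yields[1],      # minimum diesel demand: diesel >= 20
])
b_ub = np.array([100, 45, 40, -20])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("crude slate (A, B):", res.x, " margin:", -res.fun)
```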

We at BPCL are using the Process Industry Modeling System (PIMS), an LP model from
M/s Aspen Tech., USA, for term and spot crude procurement and for yearly and monthly
planning. The quarterly plan is broken into periods, say four of 7-8 days each, for the
immediate month, to arrive at a realistic production slate considering the crude arrival
schedule, using multi-period PIMS. The subsequent months are run on a fortnightly or
monthly basis, which helps in arriving at decisions on the crude oil transportation schedule.

Keywords: Linear Programming, Process Industry Modeling Systems, Supply Chain


Service Quality Evaluation on Occupational Health in Fishing Sector using Grey Relational Analysis to Likert Scale Surveys

G.S.Beriha 1, B.Patnaik 2 & S.S.Mahapatra 3

1, 2
Department of Humanities and Social sciences
National Institute of Technology (NIT), Rourkela, Orissa
3
Department of Mechanical Engineering
National Institute of Technology (NIT), Rourkela, Orissa

Email: 1 gouri_1979@yahoo.co.in, 2 pbhaswati03@yahoo.co.in, 3 mahapatrass2003@yahoo.com

The study assesses the service quality of occupational health and safety for
fishermen. Occupational hazards are a major concern in fishing, particularly in sea-water
fishing. The productivity of fishing companies is greatly affected by occupational
health problems. Occupational health problems among fishing personnel impact the
economy and social well-being of the community, in addition to causing economic and
goodwill losses for the companies. The aim of this study is to assess the occupational
health care system prevailing in the sector and to propose some remedial measures to
control such hazards in future. To this end, a specially designed questionnaire was
prepared and distributed to respondents. Grey relational analysis was then applied to
the responses, collected on a Likert-type scale, to prioritize the remedial actions needed
for improving the quality of the occupational health care system in the sector.
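
As a hedged sketch of the computation, the snippet below applies a standard grey relational analysis recipe (normalization, deviation from the ideal, grey relational coefficients with distinguishing coefficient rho = 0.5) to an invented Likert response matrix; it is not the authors' survey data or exact procedure.

```python
# Grey relational analysis over Likert responses: rank items by how far
# they sit from the ideal response, to prioritize remedial action.
import numpy as np

# rows = respondents, cols = occupational-health items (Likert 1-5)
X = np.array([[4, 2, 5, 3],
              [5, 1, 4, 3],
              [4, 2, 5, 2],
              [3, 2, 4, 3]], float)

norm = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-9)   # larger-is-better
delta = np.abs(1.0 - norm)                             # deviation from ideal
rho = 0.5                                              # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = xi.mean(axis=0)                                # grey relational grade

# lowest grade = furthest from ideal = highest priority for remediation
print("items ranked by priority for remedial action:", np.argsort(grade))
```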

Keywords: Occupational health hazards, service quality evaluation, Grey relational analysis, Likert Scale


An Empirical Study on Perception of Consumer in Insurance Sector

Binod Kumar Singh

Alphia Institute of Business Management (ICFAI)

Ranchi, Jharkhand.

E-mail: singhbinod4@yahoo.co.in, binodks@alphia.org

Consumer behavior studies the behavior of individuals or groups of people. The study of
consumer behavior enables marketers to understand and predict future market behavior.
In this paper, the role of IRDA, the role of Indian banks, the role of private insurance
companies, the functions of an insurance company, the various factors influencing
consumer behavior, the factors influencing buying decisions, and a model of the
consumer decision-making process have been considered. Also studied are the types of
insurance policy taken by consumers, the total sum assured of life insurance, the total
sum assured of life insurance for the spouse, the share of public insurers in the
insurance sector, the share of LIC in life insurance, and the reasons for investing in life
insurance. The survey was conducted across 334 cities/towns in all the states and union
territories. A sample of 1,947 individuals was selected and administered a questionnaire.
The online response system has self-checking, and its validation system vetted the
quality and veracity of the responses. Indicus Analytics then cross-checked the inputs
against its databases on investors and their habits. The majority of the respondents were
from the top five metros and 10 major cities, each contributing at least 30 participants.
The target respondents are well educated, familiar with English, spread over major urban
centers, have a higher socio-economic and income profile, and span a range of
occupations, professions and age groups.

The insurance sector provides the consumer some security against mishaps. In this
sector, IRDA plays an important role and from time to time issues important guidelines to
the various companies. Still, LIC plays an important role and has the maximum share in
this sector. Recently, the banking sector has also moved towards the insurance sector,
since banks would get better returns than the commissions they earn through
partnerships with other major insurance market players. Union Bank, Federal Bank,
Allahabad Bank, Bank of India, Karnataka Bank, Indian Overseas Bank, Bank of
Maharashtra, Bank of Baroda, Punjab National Bank, and Dena Bank are planning to
enter this sector. Among private players, the Max New York insurance company plays a
vital role. There are various factors that affect the consumer buying decision and also
influence consumer thinking when planning to invest in an insurance scheme.
Respondents generally prefer policies such as vehicle insurance, term cover insurance
and medical/health insurance, and they also prefer a sum assured of life insurance of
less than Rs 10 lakh. Most of the respondents have shown interest in life insurance with
higher risk coverage and also for tax-saving purposes.

Key words: Consumer behavior, Buying decision and Consumer decision-making process


Two Component Customer Relationship Management Model for Health Care Services

Hardeep Chahal
Department of Commerce
University of Jammu
Jammu

Email: chahalhardeep@rediffmail.com

Although customer relationship management (CRM) is a recent concept, its tenets
have been around for a long time. To sustain competitive advantage it is necessary
to understand what customers require and to equip employees to deliver to customers
more than they expect, i.e., customer value, while constantly refining value propositions
to ensure customer loyalty and retention. At the same time, developing and
maintaining CRM is not an easy task; there is a need for an objective mechanism to
operationalise CRM in the organization. The paper makes a maiden attempt to
conceptualise and operationalise CRM through a Two Component Model, comprising
Operational CRM (OCRM) and Analytical CRM (ACRM), particularly in the healthcare
sector. The relationship between OCRM, based on three patient-staff constructs
(physicians, nurses and supportive staff), and four ACRM constructs (satisfaction,
repatronization, recommendation and organizational performance), with service quality
as an antecedent to OCRM rather than as a moderator between the two CRM
components, was analysed using confirmatory factor analysis (AMOS). The data for the
model were collected from 306 patients across three large hospitals who had been
associated with their hospital for at least the last five years. The validity and reliability of
the various multi-dimensional OCRM and ACRM scales were duly assessed. Dimensions
such as caring attitude, friendliness, helpfulness, response to queries, expertise and
effective treatment are found to be significant for OCRM from the physician, nurse and
supportive-staff perspectives, and these can impact the four ACRM dimensions of
satisfaction, repatronization, recommendation and organizational performance. The
paper concludes with implications (managerial, theoretical and patient), limitations and
directions for future research.

Key Words: Operational Customer Relationship Management, Analytical Customer Relationship Management, Total Customer Relationship Management and Service Quality.


An Analytical study of the effect of Advertisement on the consumers of middle size town

Uma V.P.Shrivastava
Department of Business Administration,
Hitkarini College of Engineering and Technology,
Jabalpur, Madhya Pradesh

Advertisement, as understood, is a medium of communication and expression for
manufacturers, traders and marketers. It is a medium of expression of the image and
personality of a product or service, a medium of communication of product features and
utility to consumers, and a medium which ensures that the required information
regarding a product / service is rightly communicated to the right target consumers at
the right time in the right manner.

In middle-size towns the impact of advertising is very strong, more so than on the
residents of metros and big cities, and consumers are highly influenced by it. Studies
have shown that 37% of consumers tend to buy a product for the first time because they
liked its advertisement and it aroused curiosity about the product / service. Moreover,
as today's advertisements show, celebrities endorse one product or another; consumers
usually identify themselves with such celebrities and thus end up using or purchasing
the endorsed product. There are thus different reasons why advertisements have such a
huge impact: imagery, self-identification, glamour, or simple liking. But the crux is that
“advertising has a strong impact on the consumers.”

The hypothesis adopted in this study was that “advertisement has a very strong impact
on the consumers of middle-size towns.” This topic was chosen because, as secondary
data suggest, the consumers of middle-size towns mostly have a defined disposable
income and are largely removed from the world of glamour; they aspire to become
“someone” out of “no-one”. Advertisements are the most glamorous way of selling a
product / service, and these consumers thus fall for them. The core objectives were:
(a) What type of impact do advertisements create on the consumers of middle-size
towns? (b) What percentage of their income do consumers of the middle income group
set aside as disposable income? (c) Do consumers make purchases out of requirement
for the product or out of fancy for the advertisement?

This study was conducted in six small and semi-small towns of Madhya Pradesh
and used the basic research tools of random sampling, customer and consumer
interviews, and FGDs. The sample comprised more than one thousand respondents in
the age group of 9 to 66 years, including males and females of the A1, A2 and B1, B2
categories. This was done deliberately because it is normally people of these categories
who have the buying capacity to act on and react to advertisement. The respondents
were interviewed at various locations in the cities, and both qualitative and quantitative
data were collected for analysis.

The research data were analyzed and the key findings were: (a) advertisements do make
a strong impact on consumers in various ways; they either end up buying the product /
service themselves or recommend it to fellow consumers with immense confidence,
proving themselves loyal to the product. (b) The middle and higher income groups of
middle-size towns both have a defined percentage range of household income which they
are willing to use as disposable income, but this percentage varies across age groups of
consumers. (c) Products / services are broadly categorized as impulse-purchase products
and non-impulse, or thought-over, purchase products according to consumers' own
life-style and living requirements; impulse purchases are governed more by fancy, while
non-impulse purchases are governed more by deliberate requirement for a particular
product / service. Beyond this, considerable information and insight were obtained about
the consumers, their buying behaviour and patterns, their thought processes, and the
reasons for their ways and manners of buying products / services. The study also helps,
to an extent, in understanding why some particular products / services do better than
others in a middle-size town.

This paper discusses at length the various aspects of the influence and / or impact that
advertisement makes on consumers and their buying behaviour and patterns, supported
by both qualitative and quantitative data.

Keywords: Advertisement Impact; Consumer Purchase; Disposable Incomes; Impulse Product Advertising.


Empirical Framework of Bayesian Approach to Purchase Incidence Model

Sadia Samar Ali 1, R. K. Bharadwaj 2 & A. G. Jayakumari 3

1, 2
Institute of Management Studies, Ghaziabad
3
SSR Institute of Management & Research, Silvassa

Email: 1 sadiasamarali@gmail.com, 2 rkbharadwaj1@hotmail.com, 3 drjaya0401@gmail.com

The most favorable alternative for any company is to satisfy consumers’ demand,
which has always been a key consideration in any product demand and supply
system. Related decisions are usually based on a single dimension: either the retailer's
information or consumers' purchase data. Bayesian decision methodology
provides an alternative framework to handle the problem of over-stocking and under-
stocking, and is used to determine decision strategies for selecting the best alternative
for efficient supply-chain management. Designing a purchase incidence model for this
purpose requires purchase data obtained either from retailers or from consumer
surveys. In this paper, we propose a Bayesian Criteria Purchase-Incidence Model
(BCPIM). The proposed model can help in designing effective and efficient policy
using the information available from both sources. Further, companies can use this
analysis as a strategic decision-making tool to develop efficient supply
chain management. Finally, an example is presented to illustrate the
procedural implementation of the proposed model.
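
The abstract does not spell out its likelihood, but the keywords name the Poisson and Gamma distributions, which classically pair in purchase-incidence models as a Poisson purchase count with a Gamma prior on the rate. The hedged sketch below shows that conjugate update on invented data; it is not the authors' BCPIM.

```python
# Gamma-Poisson update for a customer's purchase rate, feeding a
# stocking decision. Prior settings and observations are invented.
import numpy as np

a0, b0 = 2.0, 1.0                       # Gamma(shape a0, rate b0) prior

purchases = np.array([1, 0, 2, 1, 3])   # purchases observed in 5 weeks
a_post = a0 + purchases.sum()           # conjugate posterior shape
b_post = b0 + len(purchases)            # conjugate posterior rate

print(f"posterior mean purchase rate: {a_post / b_post:.2f} per week")

# Posterior predictive (negative binomial): probability of at least one
# purchase next week, a quantity an over/under-stocking rule can use.
print("P(next week >= 1 purchase) ~", 1 - (b_post / (b_post + 1)) ** a_post)
```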

Key Words: Marketing, Consumer behaviour, purchase, purchase-incidence, Poisson distribution, Gamma distribution, Bayesian criteria model


Exploring Temporal Associative Classifiers for Business Analytics

O.P. Vyas1, Ranjana Vyas1, Vivek Ranga2, Anne Gutschmidt3


1
Pt. Ravishankar Shukla University, Raipur
2
ICFAI Business School – Ahmedabad
3
University of Rostock, Rostock (Germany)

Email: ranjanavyas@gmail.com, vivekranga@rediffmail.com, dropvyas@gmail.com, anne.gutschmidt@uni-rostock.de

Many crucial business decisions are taken without a systematic study of the existing
scenario and without applying suitable data analytics. The availability of large volumes
of data on customers, made possible by new information technology tools, has created
opportunities as well as challenges for businesses to leverage the data and gain
competitive advantage. On the one hand, many organizations have realized that the
knowledge in these huge databases is key to supporting their various organizational
decisions. In particular, the knowledge about customers in these databases is critical for
the marketing function. But much of this useful knowledge is hidden and untapped.

In Germany, one such effort in supermarket retailing is being discussed by both
academia and industry. The approach aims to increase customer loyalty by applying
technologies such as position tracking and Radio Frequency Identification (RFID)
scanning. The Metro Group, a large German retailer, introduced the so-called Future
Stores, where several technologies, especially RFID and WLAN (Wireless Local Area
Network), enable new services for supermarket customers. Data is retrieved from loyalty
cards combined with RFID tags on products, pallets and cases. The supermarket
transactional data, coupled with the data generated in the above recommender system,
is proposed to be harnessed for exploring hidden sales patterns through efficient data
mining techniques, by suitably modifying MBA (Market Basket Analysis) techniques.

Association rule mining, a synonym for MBA, is now an important component of data
mining and, with a series of algorithmic techniques, can handle the different categories
of business data. Experiments report that associative classification systems achieve
classification results competitive with traditional classification approaches such as C4.5
and are also more effective than plain association rule mining approaches. This paper
investigates temporal data mining, as it was observed that transactional data is
time-sensitive in nature. Valuable patterns cannot be discovered by traditional
non-temporal data mining approaches that treat all the data as one large segment, with
no attention paid to utilizing the time information of the transactions.

We have extensively studied the performance of the significant associative classifiers
CBA, CMAR and CPAR, with and without the temporal dimension. Experimentation on
ten significant benchmarking data sets led to the conclusion that TACs (Temporal
Associative Classifiers) perform better in terms of classifier accuracy than their
non-temporal counterparts. This is significant because, although transactional data show
definite time sensitivities, many data mining approaches do not include the temporal
aspect in the mining process, on the view that it may further slow down or complicate
the process of knowledge generation.
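
A minimal, hedged illustration of why the temporal dimension matters for such rules (invented baskets, not the benchmark data): the same association rule's confidence can differ sharply across time windows, and pooling the windows hides this.

```python
# Confidence of the rule {bread} -> {butter} computed per time window.
transactions = [
    (1, {"bread", "butter"}), (1, {"bread", "butter", "jam"}),
    (1, {"bread"}),           (2, {"bread", "cola"}),
    (2, {"bread"}),           (2, {"bread", "cola"}),
]

def confidence(rows, lhs, rhs):
    has_lhs = [basket for _, basket in rows if lhs <= basket]
    if not has_lhs:
        return 0.0
    return sum(1 for basket in has_lhs if rhs <= basket) / len(has_lhs)

for week in (1, 2):
    rows = [t for t in transactions if t[0] == week]
    print(f"week {week}: conf(bread -> butter) =",
          confidence(rows, {"bread"}, {"butter"}))
# A non-temporal miner pooling all weeks would blur these two regimes.
```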

Key words: Market Basket Analysis, Temporal Associative Classification, CBA, CMAR,
CPAR.


Application of Analytical Process Framework for Optimization of New Product Launches in Consumer Packaged Goods and Retail Industry

Derick Jose
MindTree Consulting, Bangalore

Ganesan Kannabiran
National Institute of Technology, Tiruchirappalli

Shriharsha Imrapur
MindTree Consulting, Bangalore

Consumer Packaged Goods (CPG) / Retail organizations are looking at the new product
development (NPD) process as a critical component for delivering breakthroughs in the
market space. With every launch, organizations spend millions of dollars researching
new products, test marketing them and releasing them to the broader market. This paper
attempts to introduce rigorous analytical techniques and processes, along with new
sources of data, to help optimize the critical decisions taken when a new product is
launched. We use an analytical framework to optimize six crucial NPD decisions; the
framework consists of a prefabricated industry-specific data model and a 10-step
process using advanced statistical techniques. We then implement the framework using
a commercial tool. Preliminary outputs show that significant insights can be leveraged
by the product development and marketing groups to align decisions regarding product
features, packaging and messaging with their target market.

Key words: New product development, analytical framework, text mining, engagement analytics, product configuration


The Predictive Analytics Using Innovative Data Mining Approach

Sunita Soni 1, Jyothi Pillai 2 & O.P. Vyas 3

Bhilai Institute of Technology, Durg, Chhattisgarh
SOS in Computer Science, Pt. Ravi Shankar University, Raipur, Chhattisgarh

E-Mail: 1 sunita_soni74@rediffmail.com, 2 jyothi_rpillai@rediffmail.com, 3 dropvyas@gmail.com

The availability of a huge amount of information does not mean a wealth of information.
Filtering the data using various mining techniques gives the essence of valuable
information, called knowledgeable information. Data mining is the exploration and
analysis of large data sets in order to discover meaningful patterns and rules. Almost
every business process today involves some form of data mining.

Most of today's structured business data is stored in relational databases. Existing data
mining algorithms (including those for classification, clustering, association analysis and
outlier detection) work on single tables or single files. Unfortunately, information in the
real world can hardly be represented by such independent tables. One of the main tasks
in data mining is supervised classification, whose goal is to induce a predictive model
from a set of training data. Multi-relational classification is a very important research
area because of the popularity of relational databases. It can be widely used in many
disciplines, such as financial decision-making, medical research, and geographical
applications.

In this paper we analyze the performance of multi-relational classifiers, an innovative
approach, for generating the predictive models used to build business intelligence
solutions that support strategic decision-making.

Key Words: Data Mining, supervised classification, Multi-relational classifiers


On Rough Approximations of Classifications, Representation of Knowledge and Multivalued Logic

B.K.Tripathy 1, J.Ojha 2 & D.Mohanty 3

1
Department of Mathematics
School of Computing Science,
VIT University, Vellore, Tamil Nadu
Email: tripathybk@rediffmail.com

2
Khallikote College, Berhampur, Orissa
Email: hodmca@sify.com

3
Department of Mathematics
Simanta Mahavidyalaya, Jharpokharia, Orissa
Email: debadutta.mohanty@rediffmail.com

Several approaches have been introduced to deal with impreciseness in data. The
concept of fuzzy sets, put forth by Zadeh (1965), is one of the earliest among them.
Another major, and perhaps better, approach is the concept of rough sets, due to Pawlak
(1982). Classification of universes is the core concept in defining basic rough sets.
Approximations of classifications are of great interest due to the fact that, in the process
of learning from examples, the rules are derived from classifications generated by single
decisions (Busse 1988 and Tripathy et al. 2009). These rules can be used in multivalued
logic. Four propositions were established by Busse (1988) which characterize properties
of approximations of classifications. These results are instrumental in defining types of
classifications, which are used to generate rules from information systems. In this article,
we extend these propositions to obtain necessary and sufficient type results. From these
theorems, several results are derived, in addition to the above four results. We shall
provide interpretations to each of these results and also illustrate them through
examples, to determine the kind of knowledge one can infer from the information
systems, which satisfy the conditions of these propositions.
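
For concreteness, here is a minimal sketch of R-lower and R-upper approximations, with the equivalence classes of the indiscernibility relation R supplied as a partition of a toy universe; the example is ours, not the paper's.

```python
def lower_approx(partition, X):
    """Union of the equivalence classes of R wholly contained in X."""
    return set().union(*(B for B in partition if B <= X))

def upper_approx(partition, X):
    """Union of the equivalence classes of R that intersect X."""
    return set().union(*(B for B in partition if B & X))

partition = [{1, 2}, {3, 4}, {5}]   # equivalence classes of R on U = {1,...,5}
X = {1, 2, 3}                       # one class of the classification

print("R-lower:", lower_approx(partition, X))   # {1, 2}: certainly in X
print("R-upper:", upper_approx(partition, X))   # {1, 2, 3, 4}: possibly in X
# X is R-definable exactly when the two approximations coincide.
```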

Key words: Rough set, classification, R-definable, R-lower approximation, R-upper approximation.


SB-Robust Estimation of Parameters of Circular Normal Distribution

Arnab Kumar Laha and Mahesh K. C

Indian Institute of Management, Ahmedabad

The Circular Normal distribution (a.k.a. von Mises distribution) is the most widely used
probability model for circular data. The maximum likelihood estimators (m.l.e.) of the
location parameter (µ) and the concentration parameter (κ) of the distribution are known
to be not SB-robust at F = {CN(µ,κ) : κ>0}. In this paper, we define a natural measure of
dispersion (S) and show that the directional mean is not SB-robust at F with respect to
S. Next, we show that the directional mean is SB-robust at F for the following families (1)
mixture of normal and circular normal distributions, (2) mixture of two circular normal
distributions, and (3) mixture of wrapped normal and circular normal distributions with
respect to S. Subsequently we define a γ-circular trimmed mean with trimming
proportion γ and show that it is an SB-robust estimator for µ at F with respect to S. Next,
we study the SB-robustness of the m.l.e. of the concentration parameter of the circular
normal distribution and show that it is not SB-robust at F with respect to S. Finally, we
define the γ-trimmed dispersion measure (Sγ) and the γ-trimmed estimator for the
concentration parameter, and show that this new estimator is SB-robust at F with
respect to Sγ.
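
As a hedged sketch of the two estimators involved, the snippet below computes the directional mean and a γ-circular-trimmed mean by trimming the γ proportion of observations farthest (in angular distance) from the directional mean; this is our reading of the construction on simulated data, not the authors' exact definitions.

```python
# Directional mean and a gamma-trimmed circular mean on von Mises data.
import numpy as np

def directional_mean(theta):
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

def circular_trimmed_mean(theta, gamma=0.1):
    mu = directional_mean(theta)
    # angular distance to mu, folded into [0, pi]
    d = np.pi - np.abs(np.pi - np.abs(theta - mu) % (2 * np.pi))
    keep = np.argsort(d)[: int(np.ceil((1 - gamma) * len(theta)))]
    return directional_mean(theta[keep])

rng = np.random.default_rng(7)
theta = rng.vonmises(mu=0.5, kappa=4.0, size=200)   # circular normal sample
theta[:5] = np.pi                                   # a few gross outliers
print("directional mean:", directional_mean(theta))
print("10%-trimmed mean:", circular_trimmed_mean(theta, 0.1))
```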

Keywords: Circular Data, Circular Trimmed Mean, Mixture distributions, Robust estimation, SB-robust estimators.


Bayesian Analysis of Rank Data with Covariates

Arnab K. Laha
Indian Institute of Management, Ahmedabad

Somak Dutta
Indian Statistical Institute, Kolkata

Rank data arise quite frequently in many areas of management, such as marketing,
finance, organizational behaviour and psychology. Such data arise when a group of
randomly chosen respondents is asked to rank a set of k items according to their order
of preference. The resultant data are a set of permutations of {1, …, k}, one permutation
for each respondent. Analysis of rank data is difficult because permutation groups do
not have a rich structure like the real line or real space, and the dimension of the data
increases very rapidly with the number of items to be ranked. In this paper we propose a
model that assumes the observed ranks to be a random permutation of the true rank,
where each permutation has some specific probability of appearing. We consider the
general case where covariates are present and the true rank is a function of the
covariate values. A Bayesian approach is taken to estimate the model parameters. The
Gibbs sampler and Population Monte Carlo methods are used to sample from the
posteriors. Two real-life data sets are analyzed using the model to indicate its usefulness.

Keywords: Gibbs Sampling, Permutation Group, Population Monte Carlo, Ranking Data


Selecting a Stroke Risk Model Using Parallel Genetic Algorithm

Ritu Gupta and Siuli Mukhopadhyay*


Department of Mathematics, Indian Institute of Technology Bombay, Powai,
Mumbai
*Corresponding Author

Email: siuli@math.iitb.ac.in

Increased transcranial Doppler ultrasound (TCD) velocity is an indicator of cerebral
infarction (stroke) in children with sickle cell disease (SCD). In this paper we use a
parallel genetic algorithm (PGA) to select a stroke risk model with TCD velocity as the
response variable. Such a stroke risk model allows children with SCD who are at a
higher risk of stroke to be identified and treated in the early stages. Using blood velocity
data from SCD patients, we show that PGA is an easy-to-use computational variable
selection tool. Model selection results show that PGA performs well when compared
with stepwise selection and best subset selection techniques.
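
A hedged sketch of GA-based variable selection with an AIC fitness function: a parallel GA would evaluate populations or islands concurrently, while this serial toy keeps only the selection mechanics. The data, rates and sizes are all illustrative assumptions.

```python
# GA variable selection: individuals are 0/1 masks over predictors,
# fitness is the AIC of the induced linear model.
import numpy as np

rng = np.random.default_rng(8)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 3] + rng.normal(size=n)   # truth uses vars 0, 3

def aic(mask):
    if not mask.any():
        return np.inf
    Xs = np.c_[np.ones(n), X[:, mask.astype(bool)]]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * Xs.shape[1]

pop = rng.integers(0, 2, (30, p))
for _ in range(40):
    fitness = np.array([aic(m) for m in pop])
    parents = pop[np.argsort(fitness)[:10]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        child = np.where(rng.random(p) < 0.5, a, b)  # uniform crossover
        flip = rng.random(p) < 0.05                  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmin([aic(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```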

Keywords: Transcranial doppler ultrasound velocity; sickle cell disease; stepwise selection; model validation


Linking Psychological Empowerment to Work-Outcomes

Anita Sarkar & Manjari Singh


Indian Institute of Management, Ahmedabad

Based on the concepts put forth by researchers (Conger & Kanungo, 1988;
Thomas & Velthouse, 1990; Spreitzer, 1995, 1996), we studied empowerment in the
context of women primary school teachers in India. The relationships of empowerment
with two important work outcomes, job involvement and innovative behavior, are
examined. While an individual's empowerment is based on self-report, the data on work
outcomes have been collected from self, superiors and colleagues. In total, 113 teachers,
8 superiors and 303 colleagues from three schools in Gujarat, India participated in
the study.

All the latent constructs under study were tested for both convergent and
discriminant validity. We performed both single-rater and multi-rater confirmatory factor
analyses, as appropriate for multi-rater research. Before aggregating the data for
colleagues, rwg, average deviation, and intra-class correlation were calculated. Structural
equation modeling was used to test model fit. Results show that empowerment leads to
job involvement and innovative behavior.

The study supports earlier rating researchers' view that different types of raters have
different perspectives on the same dimension and that these influence their ratings
(Landy & Farr, 1980; Harris & Schaubroeck, 1988). Overall, the research indicates the
importance of psychological empowerment in the workplace.

Key words: Empowerment, multi-rater, job involvement, innovative behavior, teacher


To Identify the Employability Skills for Managers through the Content Analysis of the Selected Job Advertisements

Mandeep Dhillon
ICFAI National College, Chandigarh

Email: dhillonmandeep.inc@gmail.com

The purpose of this paper was to outline the basic employability skills and/or
competencies required by employees in business organizations to perform and compete.
For the purpose of the present study, employability skills were defined as “a set of
attributes, skills and knowledge that all labor market participants should possess to
ensure that they have the capability of being effective in the workplace, to the benefit of
themselves, their employer and the wider economy”.

A content analysis was conducted of selected job advertisements published from August
to December 2008. Three hundred and ten advertisements in print (newspapers) and
online (Naukri, Monster) were selected for the study. Two judges independently coded
the advertisements and assessed the coding.

The study identified that employability skills can be categorized as Basic Academic
skills, Technical skills and Generic skills. Generic skills were further grouped into
Operational skills, Behavioral skills and other soft skills. Communication, analytical and
leadership skills were found to be the most important generic skills.

Keywords: Generic Skills, Operational Skills, Behavioral Skills, Textual Analysis


Performance Measurement in Relief Chain: An Indian Perspective

A.S Narag, Amit Bardhan and Hamendra Dangi

Faculty of Management Studies, University of Delhi, Delhi

Email: andynarag@fms.edu, amit-bardhan@fms.edu, hkdangi@fms.edu

When disasters strike, government agencies, military and paramilitary forces, and relief
organizations respond by delivering aid to those in need. The distribution chain needs
to be both fast and agile in responding to the sudden onset of disasters. Given the stakes
and the size of the relief industry, the study of the relief chain is an important domain of
supply chain management.

Performance measurement is critical to accountability in relief operations. The ultimate
goal of performance measurement in supply chain systems is to establish relationships
between decision variables and performance outputs, leading to the creation and
maintenance of high-performance systems. A performance measure (or performance
measurement system) describes the effectiveness and/or efficiency of a system. The
aim of this paper was to define, compare and contrast the commercial supply chain and
the relief chain, discuss an approach to performance measurement in the domain of
humanitarian relief, and identify the challenges faced by relief chain logisticians in
practice and research.

Key Words: Supply Chain, Disaster Management, Performance Evaluation System


Machine Learning Approach For Predicting Quality Of Cotton Using Support Vector Machine

Selvanayaki M
PSGR Krishnammal College for Women
Coimbatore
E-mail: selvanayaki79@gmail.com

Vijaya MS
GR Govindarajulu School of Applied Computer Technology
PSGR Krishnammal College for Women
Coimbatore
E-mail: msvijaya@grgsat.com

Yarn strength depends strongly on the quality of cotton. Physical characteristics such as
fiber length, length distribution, trash value, color grade, strength, shape, tenacity,
density, moisture absorption, dimensional stability, resistance, thermal reaction and
count contribute to the quality of cotton, with fibre length, strength, maturity, fineness,
tenacity, color, uniformity ratio and lint being the major factors. Three important issues
arise during cotton data collection: unlabeled samples, imbalance among the cotton
samples, and cotton samples of differing quality. The prediction of cotton quality is
therefore a challenging and important task in manufacturing quality yarn, and there is a
need for an efficient classification model that predicts the quality of cotton with high
accuracy. This paper presents an implementation of a machine learning algorithm, the
Support Vector Machine, for cotton quality prediction. The support vector machine is a
training algorithm for learning classification and regression rules from data. SVM is a
supervised pattern classification technique suited to working accurately and efficiently
with high-dimensional feature spaces; it rests on a well-developed statistical learning
theory and strong mathematical foundations, and results in a simple yet very powerful
algorithm. In this paper the support vector machine is trained using data collected from
a spinning unit.

The dataset is trained using SVM with linear, polynomial and RBF kernels, under
different settings of the degree d, gamma, and the regularization parameter C. The
performance of the trained model is evaluated using 10-fold cross-validation for its
predictive accuracy, measured as the ratio of the number of correctly classified
instances in the test dataset to the total number of instances. It is found that the
predictive accuracy of the SVM with the Radial Basis Function kernel is higher than that
of the other two models.
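
A hedged sketch of the training and evaluation loop the abstract describes, using scikit-learn: SVMs with linear, polynomial and RBF kernels compared by 10-fold cross-validation. The features are random placeholders for the spinning-unit measurements, not the actual data.

```python
# Compare SVM kernels by 10-fold cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 6))                  # stand-in fibre measurements
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in quality grade

models = {
    "linear": SVC(kernel="linear", C=1.0),
    "poly":   SVC(kernel="poly", degree=3, C=1.0, gamma="scale"),
    "rbf":    SVC(kernel="rbf", C=1.0, gamma="scale"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: 10-fold CV accuracy = {acc:.3f}")
```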

Keywords: Classification, Prediction, Support Vector Machine, Machine learning.


Machine Learning Techniques: Approach for Mapping of MHC Class Binding Nonamers

Gomase V.S.*, Yash Parekh, Subin Koshy, Siddhesh Lakhan and Archana Khade
*
Department of Bioinformatics, Padmashree Dr. D.Y. Patil University,
Navi Mumbai

Email- virusgene1@yahoo.co.in

Machine learning techniques are playing a major role in the field of immunoinformatics
for DNA-binding domain analysis. Functional analysis of the ability of DNA-binding
domain protein antigen peptides to bind major histocompatibility complex (MHC) class
molecules is important in vaccine development. The variable length of each binding
peptide complicates this prediction. Such predictions can be used to select epitopes for
rational vaccine design and to increase the understanding of the roles of the immune
system in infectious diseases. Antigenic epitopes of the DNA-binding domain protein
from Human papillomavirus-31 are important determinants for protection of the host
from viral infection. This study shows their active part in host immune reactions and the
involvement of MHC class I and MHC class II molecules in the response to almost all
antigens. We used PSSM and SVM algorithms for antigen design, representing predicted
binders as MHCII-IAb, MHCII-IAd, MHCII-IAg7, and MHCII-RT1.B nonamers from the
viral DNA-binding domain crystal structure. These peptide nonamers come from a set of
aligned peptides known to bind to a given MHC molecule, used as the predictor of
MHC-peptide binding. The analysis reveals potential drug targets and active sites
against disease.

Keywords: DNA-binding domain crystal structure, Position Specific Scoring Matrices (PSSM), Support Vector Machine (SVM), major histocompatibility complex (MHC), epitope, peptide vaccine


The Click Click Agreements – The Legal Perspectives

Rashmi Kumar Agrawal 1, Sanjeev Prashar 2

Institute of Management Technology, Ghaziabad


Email: 1 arashmi@imt.edu, 2 sprashar@imt.edu

The world post-WTO (1995) has opened new dimensions of trade and commerce,
both nationally and internationally, further facilitated by the advent of
technology. This research paper attempts to understand the legal implications of
the different types of contracts formed through the internet and the statutory
provisions pertaining to them. The main contentions for E-Contracts are the
legal sanctity of e-commerce per se and electronic governance as to writing and
signature for legal recognition. The first step in this direction was the enactment
of the Information Technology Act, 2000, which provided users of electronic
communication legal treatment equal to that of paper-based communication,
followed by amendments to the Indian Penal Code, 1860, the Indian Evidence
Act, 1872, the Reserve Bank of India Act, 1934 and the Bankers' Books Evidence
Act, 1891. The three basic forms of E-Contracts (the Click-Wrap agreement, the
Shrink-Wrap agreement and Electronic Data Interchange) and their legal sanctity
have been analysed through content analysis of the Information Technology Act,
2000 and the Indian Contract Act, 1872, supported by relevant judicial
pronouncements from both India and developed countries.

Keywords: E-Contracts, Click-Wrap, Shrink-Wrap, Electronic Data Interchange agreements.

135
ICADABAI 2009 – Abstracts

136

You might also like