ABSTRACT:
The lack of labeled data presents a common challenge in many computer
vision and machine learning tasks. Semi-supervised learning and transfer
learning methods have been developed to tackle this challenge by
utilizing auxiliary samples from the same domain or from a different
domain, respectively. Self-taught learning, a special type of transfer
learning, has fewer restrictions on the choice of auxiliary data and has
shown promising performance in visual learning. However, existing
self-taught learning methods usually ignore the structural information in
data. In this paper, we focus on building a self-taught coding framework,
which can effectively utilize the rich low-level pattern information
abstracted from the auxiliary domain, in order to characterize the
high-level structural information in the target domain. By leveraging a
high-quality dictionary learned across the auxiliary and target domains, the
proposed approach learns expressive codings for the samples in the
target domain. Since many types of visual data have been proven to
contain subspace structures, a low-rank constraint is introduced into the
coding objective to better characterize the structure of the given target
set. The proposed representation learning framework is called self-taught
low-rank (S-Low) coding, which can be formulated as a nonconvex
rank-minimization and dictionary learning problem. We devise an
efficient majorization–minimization augmented Lagrange multiplier
algorithm to solve it. Based on the proposed S-Low coding mechanism,
both unsupervised and supervised visual learning algorithms are derived.
Extensive experiments on five benchmark data sets demonstrate the
effectiveness of our approach.
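Rank-minimization objectives like the one above are commonly relaxed to the nuclear norm, whose proximal operator is singular value thresholding (SVT); ALM-style solvers apply this step repeatedly. The sketch below is illustrative only (the function name and interface are not from the paper) and shows the basic SVT step:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*.

    Shrinks every singular value of M by tau (clipping at zero). This is
    the core step used by augmented-Lagrange solvers for nuclear-norm
    (convex surrogate of rank) minimization problems.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```

A large threshold annihilates low-energy components entirely, which is what drives the solution toward low rank.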
ARCHITECTURE:
EXISTING SYSTEM:
DISADVANTAGES:
PROPOSED SYSTEM:
In many real-world visual learning tasks, the assumption of sufficient
training data may not always hold. Thus, involving additional data
resources to overcome the shortage of training data becomes an
important problem. The most representative solutions include
semi-supervised learning and transfer learning. The former addresses
this problem by using a large amount of unlabeled data from the same
domain with the same distribution to build better classifiers, while the
latter tries to leverage labeled data from related homogeneous tasks.
However, neither unlabeled data with the same distribution nor labeled
data from homogeneous tasks is easy to obtain. Recently, there has been
a surge of interest in self-taught learning (STL), which involves
unlabeled data without the above restrictions. Raina et al. first
proposed the concept of STL, applying a sparse coding mechanism to
construct a higher-level representation from the unlabeled data. Lee et
al. extended Raina's work by presenting a generalization of the sparse
coding module that can model other data types drawn from any
exponential-family distribution. From the application point of view,
Dai et al. proposed a clustering algorithm in the spirit of STL,
allowing the feature representation from the auxiliary data to influence
the target data through a common set of features. Kuen et al. employed
the core idea of STL and transferred stacked autoencoders for visual
tracking.
ADVANTAGES:
Existing low-rank learning methods focus only on a single domain, while
our approach seeks help from the auxiliary domain.
Existing work like [30] and [42] learns a dictionary only from the
target domain, and all other existing low-rank methods do not learn
dictionaries. In contrast, our approach learns a dictionary from both
the auxiliary and target domains in the STL setting.
MODULES:
3. Clustering Results
The results from the above module are passed to evaluation functions
that aggregate them: summary statistics over the clustered image set are
computed automatically and displayed to users.
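For the unsupervised setting, one standard way to turn a coding matrix into cluster labels is LRR-style spectral clustering on the affinity |Z| + |Z|^T. This is a hedged sketch of that generic pipeline, not the paper's exact procedure; the function name and parameters are illustrative:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_cluster_codings(Z, n_clusters, seed=0):
    """Cluster n samples from an n x n coding matrix Z.

    Builds the symmetric affinity |Z| + |Z|^T and runs normalized
    spectral clustering on it (illustrative sketch).
    """
    W = np.abs(Z) + np.abs(Z).T                     # symmetric affinity
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    # Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L)                     # eigenvalues ascending
    emb = vecs[:, :n_clusters]                      # smallest eigenvectors
    # Row-normalize the embedding, then k-means in the embedded space
    emb = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-12)
    _, labels = kmeans2(emb, n_clusters, minit="++", seed=seed)
    return labels
```

A block-diagonal Z (samples grouped by subspace) yields a disconnected affinity graph, so the smallest Laplacian eigenvectors separate the blocks cleanly.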
4. Data Sets and Settings
ALGORITHM
When label information is available in the target domain, we design a
classification algorithm based on our S-Low coding approach to train a
classifier. Then, with the help of the learned dictionary D, our algorithm
could classify new test samples. As discussed in Section III-A, the
low-rank codings Z_T can be considered as new representations of the
target sample set X_T. Given a test sample y ∈ R^{d×1}, we can calculate
its representation coefficients by solving

    min_a ‖y − D a‖₂² + λ ‖a‖₁

Input: data matrix X = [X_S, X_T], class labels of X_T, test sample y
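The test-time coding problem above is a standard lasso, solvable by iterative soft thresholding (ISTA) among other methods. A minimal numpy sketch, assuming D is the learned d × k dictionary (the solver choice and function name are illustrative, not from the paper):

```python
import numpy as np

def ista_coding(D, y, lam=0.1, n_iter=500):
    """Solve min_a ||y - D a||_2^2 + lam * ||a||_1 with ISTA.

    D : (d, k) dictionary, y : (d,) test sample. Returns the sparse
    coefficient vector a used to represent (and then classify) y.
    """
    a = np.zeros(D.shape[1])
    # Step size = 1 / Lipschitz constant of the quadratic term's gradient
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - y)              # gradient of ||y - Da||^2
        a = a - step * grad                         # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # soft-threshold
    return a
```

Each iteration decreases the objective, so the returned coefficients fit y at least as well as the zero vector while staying sparse.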
REQUIREMENT ANALYSIS
REQUIREMENT SPECIFICATION
Functional Requirements
1. Python
2. Django
3. MySQL
4. mysqlclient
5. WampServer 2.4
6. Datasets (Image)

Operating System (any one):
1. Windows 7
2. Windows XP
3. Windows 8

Programming Language:
1. Python
CONCLUSION