
Why Sit: Passive Sensor-based Activity Recognition of Contextual Sitting Actions

Maulin Hemani, Sangrin Lee and Andrew McConnell

Abstract: There is a significant association between time spent on sedentary activities and the risk of chronic disease. As a population, older adults are more likely to develop such chronic diseases; on a related note, they spend on average over twenty percent more time per day sitting than younger adults. Given their detrimental effects on personal health, sitting activities including watching TV, reading a book, and working at a desk pose important targets for intervention to break up the time spent on sedentary actions. Decreasing time spent sitting in older adults presents an opportunity for a great advancement in personal health.

With the overarching goal of breaking up sitting times, we present in this paper a system to detect specific sitting activities. Using a combination of inertial and sound sensors, we have developed a decision-tree based model which can differentiate sedentary contexts for the purpose of identifying which are the best targets for interventional strategies. This system will be used in real-time to understand why people are sitting, whether they could and would be willing to take a break from sitting, and what they would do if they were to break their sitting time. Such an interventional, sensor-based mobile health device could have large ramifications in the healthcare industry.

To this end, our system has been developed for use on the Asus ZenWatch, which runs on the Android Wear operating system. We trialled our system on ten people to gather data with which to build the model. By monitoring the onboard sensors, our system has been able to differentiate among standing, sitting, and lying down activities, and further contextualize them into specific categories, with an accuracy of over ninety-seven percent.

I. INTRODUCTION

Our rapidly evolving world presents many changes and challenges to people as they attempt to adjust. These changes are occurring in all aspects of our lives, including our lifestyles and personal health. One noticeable trend is the ever-decreasing need for constant physical activity throughout the day. With new types of technology and jobs, our generation has been able to notably reduce the amount of physical activity in our daily lives. The other trend pertinent to this paper is the major increase in sitting time in people's daily lives, as more of our activities now revolve around sitting. A simple illustration of this can be seen by imagining a typical day: driving to destinations, working at a desk, eating, watching TV, and browsing the internet on a laptop, for example, all involve sitting down. Whether we realize it or not, sedentary time is taking up an ever larger role in our lives. Unfortunately, this increasing sitting time is widely believed to carry related health risks and concerns, which we wish to study further. Papers by Owen and Wennman showed that it may lead to higher chances of chronic diseases such as cardiovascular disease, as well as a higher risk of premature mortality [8][13]. Additionally, the risk increases primarily for older adults, as they spend over 20% more time sitting each day and are more prone to developing chronic diseases.

To combat this, we aim to develop a model to detect both when one is sitting and the specific sitting context, using passive sensor data. By identifying both of these, we hope to be able to break up this sedentary time based on the context, once it is detected that our subject is sitting. Splitting up the sitting time by encouraging simple physical activity as a break will ideally decrease the health risks to which people who sit constantly are exposed.

II. RELATED WORKS

In developing our system, we analyzed previous research that tackled relevant issues. One similar study, which looked to identify sedentary actions related to ours, used a specially designed K-Sense monitoring system based on inertial measurement units (IMUs), consisting of an accelerometer, gyroscope, and magnetometer. The system was attached to the waist, wrist, and ankle. Using signal processing, it also checked the movements of various body parts by measuring kinetic motions. The result indicates that inertial-based activity systems can be used to identify various nuanced activities accurately. In our study, we used a smart watch which contains a 3-axis accelerometer, gyroscope, gravity sensor, and magnetometer to collect the sensor data. Based on these data, our study aimed to identify people's movement and provide users with more useful information [14].

Another study investigated the use of commercially available smart watches for activity detection, and found them to be more useful than similar smart phones for detection of specialized hand-based activities (the paper specifically notes the advantage in detecting eating different kinds of food, or drinking) [12].

Yet another study focused on the association of type-specific and total time spent sitting with the Framingham score. The Framingham score was calculated using physical information such as age, blood pressure, and cholesterol levels. Based on the score, the sedentary time and context of subjects, who wore Hookie accelerometers at their waist, were reported. Sedentary time in a sitting context was recognized from raw acceleration data based on low intensity of movement, using mean amplitude deviation and device orientation relative to an identified upright position defined at the end of each epoch. Based on this information, sitting contexts were identified. Using statistical methods, the result shows that only sitting while watching television is related to the Framingham score. This finding led our study to identify
more specific relations between sitting activities and sitting time [13].

Another study indicated that, by using a cluster heat map, a graphical representation of the accelerometer data for each subject, it is possible to assess subjects' sedentary time and activities. Based on the values taken by the accelerometer counts, minutes since walking and activity intensity (counts/min) were represented as colors in the two-dimensional map. This map identified whether the subject was walking or sitting. The study defined sitting time as accelerometer counts below 100 per minute, and related it to detrimental health status. The paper claims that the finding also covers activity transitions from sedentary to non-sedentary status, and vice versa [8].

One more study used Ecological Momentary Intervention (EMI), which provides a framework for treatment and detects eating habits, weight change, and other physical activity. Using a mobile phone, people can provide information and receive real-time assessment based on EMI delivery, frequency, and duration. Compared to EMI, our study focuses more on a sensor-based device, which has an advantage in detecting exact movement. Therefore, by analyzing this information, we believe we can make our system more valid and provide people with more exact information [3].

In contrast with past studies, our system aims to use a combination of smart watch-based sensors only, in order to minimize the overall impact on one's normal life when using the system. By containing all the necessary hardware on a commercial watch, we encourage users to adhere to using the system. To combat the loss of sensor information from using only one device, we also implemented software which allows the watch's built-in microphone to act as a sensor for sound amplitude, which adds another dimension to the recorded data for an activity sample. We believe this additional information will help differentiate the more nuanced activities by helping to identify the user's environment (i.e. whether the television is on). Also, rather than calculating features in the frequency domain, we analyzed the instantaneous rate of change between two consecutive data samples to provide additional context for the model. We believe that our system's configuration will prove to be a useful advancement in the field of Activity Recognition (AR).

III. SYSTEM

Our primary source of data acquisition was the Asus ZenWatch (WI500Q). This specific device was chosen due to the variety of sensors available in its hardware. This includes a three-axis accelerometer, a three-axis gyroscope, a three-axis magnetometer, and a heart rate monitor, as well as a built-in microphone, which we have developed code to convert into a sound amplitude sensor. The heart rate monitor was not used, as it requires the user to place two fingers on the watch screen (which reduces the passive nature of detection), and has been reported to have below-average accuracy [9].

Further advantages of the ZenWatch come from its use of Invensense sensors, which have been developed to include virtual sensors that manipulate raw data to create more contextual information, such as orientation and linear acceleration (see Table I). This essentially provides some automatic feature generation capability [6].

Additionally, the watch has Bluetooth capabilities, which allow for pairing with a nearby Android device. This was initially utilized for long-term data storage on a remote server, but this feature was later removed due to data loss and slow transmission on large data samples.

For testing, we developed an Android Wear app to allow sensor data to be collected and marked with the correct activity being performed, which was used for initial model validation.

TABLE I
SYSTEM SENSORS [7]

Sensor               | Description                                                 | Type
Accelerometer        | Acceleration of the device along the 3 sensor axes          | Raw
Gravity              | Direction and magnitude of gravity in the device's          | Fusion
                     | coordinates                                                 |
Gyroscope            | Rate of rotation of the device around the 3 sensor axes     | Raw
Magnetometer         | Magnitude and direction of Earth's magnetic flux            | Raw
Linear Accelerometer | Linear acceleration of the device in the sensor frame,      | Feature
                     | gravity vector subtracted                                   |
Rotation Vector      | Orientation of the device relative to the East-North-Up     | Fused
                     | coordinate frame                                            |
Step Detector        | Generates an event each time a step is taken by the user    | Feature
Microphone           | Sound amplitude                                             | Feature
Orientation          | Pitch, roll and yaw based on inertial sensors               | Fusion

A. Back End System

Our back end system is primarily comprised of a foreground job and a background job. When the app starts in the foreground, it first checks local permissions (read/write to external storage, record audio, modify audio settings, etc.). If these are not granted, the app requests all of them in order to run on the device. Permissions are requested in the Android manifest, but if they are not granted due to the Android API level, we developed code to request permissions when the app starts. At the same time, all the sensor objects (Accelerometer, Gyroscope, Gravity, etc.) and the audio system object are initialized. Once a user selects an activity, all sensors are registered and the audio object starts to calculate sound amplitude data. When this begins, the sensors record samples at a rate of 5 Hz (chosen with consideration towards the cost of the sound amplitude calculation), and the data are stored in a built-in SQLite database on the local SD card for the duration of testing. When the stop button is pressed, each sensor is unregistered and sensor events stop. After the user completes all the activities and presses the replicate button, the app converts all the entries in the SQLite database into CSV files stored on the watch. Once this is done, all the entries in the database are deleted. The converting and deleting processes run in the background, so the user can continue to perform other activities while entries are converted into CSV files.
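The paper does not spell out how a microphone buffer becomes a single "sound amplitude" value; a common choice is the root-mean-square (RMS) of each PCM buffer read from the audio system (e.g., Android's AudioRecord). The sketch below is our assumption of such a computation, not the authors' exact code:

```python
import math

def rms_amplitude(pcm_samples):
    """Root-mean-square amplitude of one buffer of 16-bit PCM samples.

    Assumed stand-in for the watch app's 'sound amplitude' value; the
    paper does not specify the exact formula used on the device.
    """
    if not pcm_samples:
        return 0.0
    return math.sqrt(sum(s * s for s in pcm_samples) / len(pcm_samples))

# A loud buffer yields a larger amplitude value than a quiet one,
# which is what lets amplitude features separate TV from silence.
quiet = [10, -12, 8, -9]
loud = [4000, -3900, 4100, -4050]
assert rms_amplitude(loud) > rms_amplitude(quiet)
```

Computing one such value per buffer is comparatively expensive, which is consistent with the 5 Hz sampling rate chosen above.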
B. System Communication

All the interactive items in the front end are saved in resources with their IDs, and each method in the back end can find interactive items by their IDs. Once they are connected, each method in the back end can change their properties at any time. The initial design of the app had the SQLite data sent to a remote server via a Bluetooth pairing and internet connection, but this was problematic due to the slow transmission rate, so we redesigned it to store the entire test data set on the watch until it could be pulled later. This eliminates the need for a local WiFi connection, or even a Bluetooth pairing with a smart phone; this quality is useful for the overall study, as further iterations of the app can now be set up to recognize activity on-the-fly, without dependence on other devices. Instead, the watch is later connected to a computer via USB and the CSV files can be extracted from the ZenWatch's internal storage in an Android Debug Bridge (adb) shell environment. Later iterations of this app could even remove this component, so that the data can be processed in real-time on the watch, removing the need to transfer data to any other device.

Fig. 1. ZenWatch App Interface

C. App Design

The user interface is primarily comprised of four textview widgets to display status and sixteen button widgets to perform actions. The four textviews display the user's unique identification and which context the user is performing. The buttons provide options to change the user ID, stop the context, write all the database entries into CSV files, display the total number of entries in the local SQLite database, and select the context which the user will perform (to establish a ground truth).

IV. METHODS

As the ZenWatch collected test data, the data were stored locally on the watch for the duration, until a later time when they could be processed. Each data sample was stored in the local SQLite database until replication, when the database is converted into CSV files. On transfer to a computer, each sample was split and placed into one of two CSV files: one file containing the output sensor data (with sensors in columns, and each row corresponding to a single point in time), and one file containing the user ID, the label, and the time point for the corresponding sensor data. Following data collection, the data were preprocessed through PASDAC prior to classification and evaluation. Specifics are described below.

A. Preprocessing

Data were segmented by the sliding window method into windows of 2 second length with 1 second overlap, as previous studies have shown this to be a good window length on average [4]. The data were also smoothed, and then feature extraction commenced.

Our feature extraction focused on the time domain. This was with the intention of maximizing performance speed, so that in the future the model may be applied to real-time scenarios. Because we believe the sensor data will not be particularly repetitive when compared to less-sedentary activities such as walking or running, we do not believe there would have been much net benefit outside of the time domain.

We calculated a variety of features based on the sensor data in order to assist in specific activity recognition. As one of our detection focuses, statistical metrics were collected across the sensors to assist in determining subject position (sitting, standing, and lying down). These features include mean and variance. The goal of this portion is to see if the user is sitting, before more complex sitting activities are determined. Step detection also turned out to be a useful metric here: if a step was detected, the user was not likely to be sitting (depending on the reliability of the sensor).

Envelope metrics were calculated as well, particularly for sensor data. Such features include max, min, and range. We believed these would be especially useful with respect to sound amplitude for determining the environment. This was expected to particularly factor into detecting activities such as talking, phone use, and watching television, as well as differentiating between the silent and talking activities (i.e. standing versus standing while talking). A zero-crossings feature was also used for similar purposes, calculated per window by

ZC_w = sum_{i=1}^{n-1} f(S_i, S_{i+1})

where S_i is the data point at time i, n is the number of samples in the window, and f is a function such that

f(x, y) = 1 if x * y <= 0, and f(x, y) = 0 if x * y > 0.

A mean-crossings feature was also calculated for each data column, counting the number of times the signal crosses the mean rather than the zero axis.

Values were also calculated which measure the instantaneous change in sensor data. Each point was calculated by

dV_i = (V_i - V_{i-1}) / 2

Each of these rates also has the above statistical features calculated on it.

B. Classification

We used machine learning algorithms to train our activity classifier, utilizing 10-fold validation. Though we evaluated the results of a few algorithms, including Naive Bayes, C4.5 Decision Tree, K-Nearest Neighbors, and Random Forest, we present our model based on a C4.5 Decision Tree, similar to previous work [14], due to its easy interpretability. A small portion of our tree can be seen in Figure 2.

Fig. 2. A visualization of a part of our decision tree [10]

V. EXPERIMENTAL SETUP

The initial round of user testing, which was conducted to build our model, consisted of ten subjects, with testing done in their home environments. The decision to test in users' homes was made to ensure that the resulting data represent the most realistic actions the user would make, given their personal situations.

Fig. 3. Example setup of User Testing, showing the Writing/Desk Work (sitting) activity

Each user wore one Asus ZenWatch 1 on their left wrist. The left wrist was chosen based on related works [2], which found the non-dominant wrist to be better suited for AR tasks. For consistency, the left wrist was chosen for everyone, as the right hand is more commonly the dominant one. In their home environments, the users performed each of the sitting activities we aimed to differentiate for five minutes each. They also performed comparative standing activities for differentiation, to generate a total of one hour of data per test subject. The resulting datasets were used to build the model and for initial validation.

A second round of user testing may be conducted for further model validation. This would be done by having test subjects wear the ZenWatch outside of the lab as they perform their daily routine activities. During this period, whenever the AR model detects that the user is both sitting and has begun one of our target activities, the system will send a message to the user, asking for confirmation of the detected activity. At the same time, it may also ask if the user would be willing to take a break from sitting during this activity. It will not send the follow-up question every time, so that the user does not become disinclined to continue using the system. The reasoning behind this secondary validation testing is to show the AR model's usefulness in the field.
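The segmentation and crossing features defined in Section IV-A can be sketched directly from their stated formulas. This is a minimal illustration under the parameters given above (5 Hz sampling, 2 s windows, 1 s overlap); the function names are ours, and this is not the PASDAC implementation itself:

```python
def sliding_windows(samples, rate_hz=5, win_s=2, overlap_s=1):
    """Split a signal into 2 s windows with 1 s overlap.

    At 5 Hz this gives 10-sample windows stepping by 5 samples, so
    consecutive windows share half their samples.
    """
    win = rate_hz * win_s
    step = win - rate_hz * overlap_s
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, step)]

def zero_crossings(window):
    """ZC_w = sum of f(S_i, S_{i+1}); f is 1 when the product is <= 0."""
    return sum(1 for a, b in zip(window, window[1:]) if a * b <= 0)

def mean_crossings(window):
    """Same count, but against the window mean instead of the zero axis."""
    m = sum(window) / len(window)
    return zero_crossings([s - m for s in window])

def instantaneous_change(samples):
    """Halved first difference, (V_i - V_{i-1}) / 2, per consecutive pair."""
    return [(b - a) / 2 for a, b in zip(samples, samples[1:])]
```

The envelope and statistical features (max, min, range, mean, variance) are then computed per window, both on the raw columns and on the instantaneous-change series.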
C. Evaluation

We evaluated our system based on the choice of features (judged by information gain), the choice of machine learning algorithm (judged by accuracy and/or F-measure), and, most importantly, the effectiveness of the overall system. The final evaluation was based on a confusion matrix, total accuracy, true positive rate, false positive rate, and F-measure. The results of this evaluation setup are presented in Section VI, and the evaluation formulas are summarized in Appendix I.

VI. RESULTS

Based on the initial round of user testing, our results show the following with regards to the evaluation criteria outlined in Section IV-C. Our most useful features, as shown in Table III, were found to focus heavily on those that look at change in sensor data over time. These were calculated in Weka, a machine learning workbench for knowledge analysis, using 10-fold validation, with the reported values being their average information gains across folds, sorted by their average rankings. In particular, the maximum change in sound amplitude was found to be the most informative feature (see Fig. 5), with the minimum change in acceleration in the Z direction a close second.

Though we intend to use a C4.5 Decision Tree as our system's classifier due to its intuitive nature, we analyzed our datasets in Weka using a few other algorithms, as displayed in Table IV. As expected, the Decision Tree proved to be very accurate, yielding 97.792% correctly classified instances. The K-Nearest Neighbors and Random Forest algorithms also proved to be very useful. In comparison, the Naive Bayes classifier resulted in a fairly low accuracy of 48.832%.

Using our Decision Tree model, Table II shows our resulting confusion matrix, with the true activities in rows and our classifier's labels in columns. From this, we determined our classifier to have (in weighted average across activities) a recall of 97.8%, a false positive rate of 0.2%, a Receiver Operating Characteristic (ROC) area of 99.1%, and an F-measure of 97.8%.

TABLE II
CONFUSION MATRIX

Activity                  1    2    3    4    5    6    7    8    9   10    11   12
1  Television           970    2    1    0   11    5    0    0    1    0     0    0
2  Computer Use           0  944    6    4    3    3    0    0    0    4     1    0
3  Reading                2    0  692    0    3    2    0    1    0    0     1    0
4  Writing/Desk           2    2    0  628    1    3    0    1    2    1     0    1
5  Phone Use             13    4    2    0  973    4    7    0    0    0     2    0
6  Talking                3    0    0    0    4  989    5    1    0    2     2    0
7  Other                  1    0    0    0    2    6  929    0    0    2     0    0
8  Standing               1    0    0    2    0    1    1  921    2    0     6    2
9  Walking                0    0    0    2    0    2    0    2  922    0     2   28
10 Lying Down             1    1    0    0    1    5    1    1    2  742     0    0
11 Standing (Talking)     0    0    1    1    1    3    0    6    0    0  1020    0
12 Walking (Talking)      0    0    0    1    0    0    0    1   41    0     1  988

Fig. 4. Example setup of User Testing, showing the Lying Down comparison activity

Fig. 5. Chart comparing Maximum Change in Sound Amplitude (on the Y axis) vs. the twelve activities (on the X axis)

TABLE III
FEATURE INFORMATION GAIN

Feature          | Average Merit
Sound-Max        | 3.055  0.04
Accel_z-Min      | 2.98   0.13
Mag_x-Max        | 2.926  0.026
Mag_x-Min        | 2.935  0.02
Orient_yaw-Min   | 2.936  0.1
Mag_z-Min        | 2.936  0.1
Gyro_x-Min       | 2.942  0.136
Gyro_y-Min       | 2.949  0.132
Lin.Accel-Max    | 2.941  0.152
Accel_x-Min      | 2.85   0.035

TABLE IV
CLASSIFIER ACCURACY

Classifier           | Accuracy
Naive Bayes          | 48.8321%
C4.5 Decision Tree   | 97.7920%
Bayes Net            | 98.8595%
K-Nearest Neighbors  | 99.4252%
Random Forest        | 99.6898%

VII. DISCUSSION

Our results show that our system holds great promise for activity recognition of sitting contexts, particularly using the Decision Tree algorithm. With respect to the choice of machine learning algorithm, most of the options proved fairly useful, though the Decision Tree remains the standout for balancing understandability with comparable accuracy. Of note is the low accuracy of running Naive Bayes on the data; in the resulting confusion matrix from Weka, we found that the algorithm was still successful in differentiating sitting from standing activities, but the more nuanced contexts provided difficulty (with lying down often being mistaken for a sitting activity).
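The weighted-average recall, false positive rate, and F-measure reported above follow from the confusion matrix and the formulas summarized in Appendix I. A minimal sketch, using a small hypothetical 3-class matrix in place of Table II:

```python
def per_class_metrics(cm):
    """Recall, false positive rate, and F-measure per class, from a
    confusion matrix with true activities in rows and predicted labels
    in columns (the same layout as Table II)."""
    total = sum(sum(row) for row in cm)
    out = []
    for k in range(len(cm)):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp
        fp = sum(cm[r][k] for r in range(len(cm))) - tp
        tn = total - tp - fn - fp
        recall = tp / (tp + fn)              # TPR = TP / (TP + FN)
        fpr = fp / (fp + tn)                 # FPR = FP / (FP + TN)
        precision = tp / (tp + fp)
        f1 = 2 * precision * recall / (precision + recall)
        out.append((recall, fpr, f1))
    return out

def weighted_average(cm, metrics):
    """Average each metric over classes, weighted by class support,
    matching Weka's weighted-average summary."""
    support = [sum(row) for row in cm]
    total = sum(support)
    return [sum(s * m[j] for s, m in zip(support, metrics)) / total
            for j in range(len(metrics[0]))]

# Hypothetical 3-class example (not data from this study).
cm = [[8, 1, 1], [0, 9, 1], [1, 0, 9]]
recall_w, fpr_w, f1_w = weighted_average(cm, per_class_metrics(cm))
```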
User testing was conducted over the course of one week, with each trial conducted in the user's own home for realistic personalization of the resulting data. In our first round of testing, which utilized the remote Northwestern server for data storage and processing, we noticed some inconsistencies in the rate of data collection. While the resulting samples still seemed to be accurate, we decided not to include that subject's results in the overall model, to be safe.

We included an Other category in which users performed any sitting activity not already categorized that they may normally perform. The activity chosen was most commonly playing a video game or eating. Surprisingly, the classifier was quite accurate in its classification of the Other category. This could be because, despite the inconsistency in activity, the activities chosen tended to be much more active than the other sitting activities we analyzed, which would result in different sensor data.

After building the model, we found that our most informative features tended to be various statistical representations of the instantaneous changes of sensor data (for example, maximum instantaneous change in sound amplitude). This may indicate that features of other domains, particularly frequency, may have helped the model further. While other feature domains may help to improve accuracy, they were set aside due to their complexity, which could interfere with the real-time analysis that this system is intended for.

We feel confident in our system's contribution to the field of mobile health, particularly given its use of the microphone sensor and other less commonly used inertial sensors. In particular, the gyroscope and microphone, as well as the linear acceleration virtual sensor, provided the four most informative features (see Table III). Other studies which attempted to contextualize similar activities focused on pairing multiple accelerometers worn on different parts of the body. In contrast, our system only uses a single smartwatch, which adds a passive capacity to its use. Also, as can be seen in our confusion matrix (Table II), while our system was very successful in its ability to contextualize different sitting activities, it was even more capable of differentiating between sitting and standing activities. The most common mistakes were classifying Walking and Talking as just Walking (41 instances) or vice versa (28 instances), and Phone Use as Television (13 instances) or vice versa (11 instances).

One area which could be improved upon is the optimization of the current version of the app. For the best data, our app was designed to aggressively retain control of the watch's CPU for the duration of testing. This was done both by implementing a WakeLock in the code to prevent the app from moving into the background, and by changing a display variable every few seconds to prevent the watch from entering ambient mode. As the study progresses, these should be improved to optimize the battery life of the watch, which can currently last for roughly four hours of continuous data acquisition.

Given more time, we would have liked to begin the second round of validation, which would have occurred in the field. Though this step of the study may take one or more months, we would have preferred to be able to set up the testing methodology and run the first few people through testing.

VIII. FUTURE

The long term goals of this project are to:
- identify frequently occurring sedentary behaviors in older adults, and
- understand intention and motivation for sitting and for breaking up time spent sitting

This activity recognition system provides the means to accomplish these tasks. Recall that the intention of this study is to break up sedentary activity, when reasonable. Moving forward, the next steps are to identify which sitting contexts are the best targets for intervention, in order to maximize the usefulness of the system. This process will begin with the second round of user testing mentioned above, which will demonstrate the system's effectiveness in the field. Detecting a new sitting activity will (possibly) trigger a notification from the system, asking if the user would be willing to break their sitting time, and what they would do if they did. This event should not occur at every change in sitting context, lest subjects become disinclined to continue using the system. Instead, it should occur with some low probability, with in-the-field testing conducted over many hours to collect enough data. Combined with the activity detection model, this information can be used to detect a sitting bout and push an ecological momentary assessment type prompt to the user.

Further developments necessary for this system with respect to the overarching study include:
- redesigning the system's data collection so that the sensors may register data in the background
- implementing activity recognition in real-time within the smart watch, with samples being deleted over time to save space (though the watch was able to hold over 30,000 samples during testing, which equates to roughly two user trials, or around 3 hours)
- optimizing battery life, so that the system may be used passively on a watch over long periods of time (for instance, over a day)
- implementing a messaging system in-app for field validation and identification of the optimal sitting activities for intervention

The first steps towards the redesign of the app have already been taken. Though it is not yet fully integrated with our Decision Tree, the new app is capable of differentiating between Computer Use, Standing, and Walking activities over ten second windows in real-time.

APPENDIX I

The following formulas are used as part of the evaluation process.

Information Gain [5]:

InfoGain(Class, attribute) = Entropy(Class) - Entropy(Class | attribute)

Accuracy:

acc = (TP + TN) / (TP + TN + FP + FN)

F-Measure:

F1 = 2 * (precision * recall) / (precision + recall)

True Positive Rate/Sensitivity/Recall:

TPR = TP / (TP + FN)

False Positive Rate (1 - Specificity):

FPR = FP / (FP + TN)

The last four values were averaged over every class.

REFERENCES

[1] Figo, Davide, Pedro C. Diniz, Diogo R. Ferreira, and Joao M. P. Cardoso. Preprocessing Techniques for Context Recognition from Accelerometer Data. Personal and Ubiquitous Computing 14.7 (2010): 645-62. Web. 13 Feb. 2017.
[2] Gjoreski, Martin, Hristijan Gjoreski, Mitja Lustrek, and Matjaz Gams. How Accurately Can Your Wrist Device Recognize Daily Activities and Detect Falls? Sensors 16.6 (2016): 800. Web. 11 Feb. 2017.
[3] Heron, Kristin E., and Joshua M. Smyth. Ecological Momentary Interventions: Incorporating Mobile Technology into Psychosocial and Health Behaviour Treatments. British Journal of Health Psychology 15.1 (2010): 1-39. Web. 13 Feb. 2017.
[4] Huynh, Tam, and Bernt Schiele. Analyzing Features for Activity Recognition. Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-aware Services: Usages and Technologies - sOc-EUSAI '05 (2005). Web. 13 Feb. 2017.
[5] InfoGainAttributeEval. Weka. Web. 15 Feb. 2017.
[6] Motion Sensors. Encyclopedia of Nanotechnology (2016): 2285. Sensor-Introduction.pdf. Invensense, 26 June 2012. Web. 15 Feb. 2017.
[7] OpenSignal Mobile Sensors. OpenSignal.com. Web. 15 Feb. 2017.
[8] Owen, Neville, Genevieve N. Healy, Charles E. Matthews, and David W. Dunstan. Too Much Sitting. Exercise and Sport Sciences Reviews 38.3 (2010): 105-13. Web. 13 Feb. 2017.
[9] Seifert, Dan. Asus ZenWatch Review. The Verge. 10 Dec. 2014. Web. 06 Mar. 2017.
[10] SmartDraw Cloud Notification. SmartDraw. Web. 06 Mar. 2017. <https://cloud.smartdraw.com/>.
[11] Tapia, Emmanuel Munguia. Using Machine Learning for Real-time Activity Recognition and Estimation of Energy Expenditure. Thesis. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008. MIT Media Lab. Web. 13 Feb. 2017.
[12] Weiss, Gary M., Jessica L. Timko, Catherine M. Gallagher, Kenichi Yoneda, and Andrew J. Schreiber. Smartwatch-based Activity Recognition: A Machine Learning Approach. 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI) (2016). Web. 13 Feb. 2017.
[13] Wennman, Heini, Tommi Vasankari, and Katja Borodulin. Where to Sit? Type of Sitting Matters for the Framingham Cardiovascular Risk Score. AIMS Public Health 3.3 (2016): 577-91. Web. 13 Feb. 2017.
[14] Zaman, Kazi I., Sami Yli-Piipari, and Timothy W. Hnat. Kinematic-based Sedentary and Light-intensity Activity Detection for Wearable Medical Applications. Proceedings of the 1st Workshop on Mobile Medical Applications - MMA '14 (2014). Web. 13 Feb. 2017.
