
SEAtS Software

Evaluation Report

Learning Analytics Tool



RATIONALE

Learning Analytics Tool: SEAtS Software

SEAtS Software promotes itself as a student success platform. It appears to combine a Student
Information System (SIS) and a Learning Management System (LMS) into one platform. Slade & Prinsloo
(2013) identify the importance of students as agents who can "voluntarily collaborate in providing data
and access to data to allow learning analytics to serve their learning and development" (p. 1529).
Consequently, I actively searched for a Learning Analytics (LA) tool that would provide this
affordance. Another aspect that led me to select SEAtS Software to evaluate was that the company
highlighted how multiple stakeholders (students, faculty, and administration) could use and benefit from
the software, and it also boasted of several reputable customers across Canada, America, and the United
Kingdom, including Athabasca University. There was also a lack of online reviews from current users
of this software. For these reasons, I decided to provide an evaluation of the learning
analytics of SEAtS Software.

The Evaluation Framework

My learning analytics evaluation framework is modelled on the frameworks of both Cooper (2012)
and Scheffel et al. (2014). This framework was developed for K-12 educators and administrators to
support their search for, and evaluation of, a learning analytics tool. The Cooper framework chose to step
away from "soft factors", subjective elements such as cultural context, which featured in the original
framework by Greller and Drachsler. Cooper's rationale was to focus on hard factors first and to debate
the emerging soft factors later. Scheffel et al. (2014) categorized outlier statements into a cluster called
Acceptance and Uptake because of their inability to fit clearly within any other cluster. This led to an ad
hoc criterion called organizational aspects, which featured some of these soft factors. My belief was that
the soft factors should not be delayed or ignored but instead must be intentionally woven into the
framework, because although they do not necessarily contribute to the selection and analysis of a
learning analytics tool, they do need to occur for the search to be initiated successfully. Consequently,
I created a checklist to be used before commencing the evaluation of a learning analytics tool with the
reworked framework.

Before commencing the evaluation of learning analytics tools, a high school should first identify
which LA software products are available and what they cost, to determine whether the school budget
could support implementing the one selected. Next, the infrastructure of the school would need to
be analyzed, specifically the school network. Most schools operate on a Local Area Network (LAN),
and connectivity should first be inspected. There would then need to be a check of which devices are
available to download or access the selected learning analytics software or browser-based program.
Access and compatibility for all intended users would need to be assessed.

Before searching for and implementing LA software, acceptance from a variety of stakeholders
should be sought. Scanning the educational culture to ensure that a growth mindset is present
would support organization-wide change. Moreover, stakeholders would need to be convinced of the
benefits of LA in order to be motivated to use it once implemented.

Lastly, a check that LA could be implemented in the high school would be necessary. Stakeholders
would need to be given both time and training on how to use the new LA tool. To achieve this
transition, a team or individuals to provide IT support for network, software, and device issues, as
well as a team or individuals to provide LA support on how to use the new software, would be
needed. Once the checklist is complete, the school would be ready to commence its search for an LA
tool. Once the search has begun, a framework is needed to help evaluate and differentiate the
available LA tools. Below is the proposed framework.

This framework is a condensed version of the Cooper framework, which provides the context for
evaluation. The key difference is that this framework includes the Scheffel et al. (2014) quality
indicators of both importance and feasibility. It is proposed that, after the Analysis and Data Origin of an
LA tool have been evaluated, the tool also receive a rating on each of three quality indicators on a
scale of one to three, one being weak and three being strong.

• Transparency: refers to both what information is provided to all stakeholders and how
accessible that information is (i.e., its presentation).

• Data Ownership: refers to who has access to and control over how data is created,
recorded, collected, and monitored. It also considers any potential data outsourcing
or transfers to third parties.

• Privacy and Ethics: refers to what kinds of data are collected and where they are stored,
for example, Canadian versus American data centers, and whether the tool follows the
Freedom of Information and Protection of Privacy Act (FIPPA). For example, what kinds
of de-identification tools are available, and do stakeholders have the ability to provide
consent or opt out?

After the Orientation & Objectives and the Technical Approach of an LA tool have been evaluated, the
following would be rated on the same three-point scale (a brief sketch of how these ratings might be
recorded and compared follows the list):

• Actionable Insights: the computer methods used to determine at-risk students and both the
computer- and human-generated recommendations that result from these detections.

• Student Agency: refers to the motivation of students to use the LA tool for their own regulation
and accountability.

• Transformative: refers to how useful the LA tool would be to stakeholders. Is it comparable
to other LA tools, or does it provide new information and opportunities? The question should
be asked: how effective, efficient, and helpful will this potential LA tool be for all stakeholders?
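
To make the ratings easier to record and compare across candidate tools, a school team could tally them in a short script. The sketch below is only an illustration of that idea, written in Python; the unweighted sum, the tool names, and the example scores are my own assumptions and are not part of the framework itself.

```python
# Minimal sketch (assumption): record the six quality-indicator ratings (1 = weak,
# 3 = strong) for each candidate LA tool and compare unweighted totals.
# Tool names and scores below are hypothetical placeholders.

INDICATORS = [
    "Transparency", "Data Ownership", "Privacy & Ethics",        # after Analysis & Data Origin
    "Actionable Insights", "Student Agency", "Transformative",   # after Orientation & Technical Approach
]

def total_score(ratings: dict) -> int:
    """Sum the 1-3 ratings, checking that every indicator has been rated."""
    missing = [name for name in INDICATORS if name not in ratings]
    if missing:
        raise ValueError(f"Unrated indicators: {missing}")
    return sum(ratings[name] for name in INDICATORS)

candidate_tools = {
    "Tool A": {"Transparency": 2, "Data Ownership": 1, "Privacy & Ethics": 2,
               "Actionable Insights": 3, "Student Agency": 1, "Transformative": 2},
    "Tool B": {"Transparency": 3, "Data Ownership": 2, "Privacy & Ethics": 3,
               "Actionable Insights": 2, "Student Agency": 2, "Transformative": 2},
}

for tool, ratings in sorted(candidate_tools.items(),
                            key=lambda item: total_score(item[1]), reverse=True):
    print(f"{tool}: {total_score(ratings)} / {len(INDICATORS) * 3}")
```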

Evaluation: Student Retention

Analysis Subjects: Students

Analysis Clients: Academic & Administrative Staff and Students

(Students are loosely classified as Analysis Clients as they can be part of the alert
system and consequently could adjust their own learning without further external
intervention.)

Analysis Objects: Students

Data Origin

Private Data: SEAtS uses reactive and non-reactive data, and integrates and processes them to
identify and isolate risk indicators that drive current retention, progression, and graduation metrics.
SEAtS collects data into a single repository. This supports teachers in data literacy to better collect
and interpret the data (Ifenthaler, 2016).

Data Quality & Source: SEAtS does not collect new data; it only "sits above and draws information"
from whatever student records, timetabling, and other systems are already in place. Quality will
depend on the size of the educational setting; for most high schools this would likely be small to
modest. Educational settings follow FIPPA policies on personal data use and sharing.

Ratings:
Transparency: 1 2 3
Data Ownership: 1 2 3
Privacy & Ethics: 1 2 3

Orientation & Objectives

Past [Diagnostic]: looks at student attendance, engagement, and performance attrition.

Present: creates real-time Early Warning System (EWS) alerts via e-mail and SMS to both academic
and administrative staff on populations of students who may be 'at risk', such as those displaying
disengagement with the course. EWS alerts can also be expanded to include the 'at risk' student as
well.

Future [Extrapolation]: data manipulation can be presented to students about potential future
outcomes. For example, showing students their current data and making minor changes to
attendance or grades can demonstrate and visualize potential future results.

Objective Type: Performance management (to increase retention, progression, attainment, and
graduation metrics).

Technical Approach

Traditional: Descriptive model that uses rules-based analytics to generate data sets. Data is
harvested from existing SISs, VLEs, and LMSs. Tolerance levels can be set by individual institutions
for different educational contexts, and breaches of tolerance can create and assign calls to action for
both staff and students (a rough sketch of such a tolerance rule follows the ratings below).

Machine: Supervised machine learning; once enough historic data is collected, models can be
trained to create refined predictive profiles of potential 'at risk' students.

Simulation: Although not entirely clear, SEAtS cites a benefit of their software as being able to model
risk factors to build a Student Retention and Completion model that reflects the realities within a
given educational institution.

Presentation: A campus-wide and customizable dashboard that can unify both physical and digital
data to create personalized student profiles of engagement, retention, and achievement. Data is
presented with visual aids like static and live charts.

Ratings:
Actionable Insights: 1 2 3
Student Agency: 1 2 3
Transformative: 1 2 3
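
As a rough illustration of the rules-based approach described under Traditional above, the Python sketch below shows how an institution-defined tolerance (for example, a minimum attendance rate) might trigger an early-warning alert routed to staff and, optionally, the student. The field names, thresholds, and alert routing are hypothetical assumptions; SEAtS does not document its actual rule format.

```python
# Minimal sketch (assumption) of a rules-based early-warning check: an institution
# sets tolerances, and a breach generates an alert routed to staff and, optionally,
# the student. All field names, thresholds, and routing are hypothetical.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    student_id: str
    attendance_rate: float        # 0.0-1.0, drawn from existing SIS/LMS records
    days_since_last_login: int    # drawn from existing LMS/VLE records

# Institution-defined tolerances for this educational context
ATTENDANCE_TOLERANCE = 0.80
INACTIVITY_TOLERANCE_DAYS = 14

def check_tolerances(record: StudentRecord) -> list:
    """Return a list of breached tolerances for one student."""
    breaches = []
    if record.attendance_rate < ATTENDANCE_TOLERANCE:
        breaches.append(f"attendance {record.attendance_rate:.0%} is below {ATTENDANCE_TOLERANCE:.0%}")
    if record.days_since_last_login > INACTIVITY_TOLERANCE_DAYS:
        breaches.append(f"no LMS activity for {record.days_since_last_login} days")
    return breaches

def send_alert(record: StudentRecord, breaches: list, include_student: bool = False) -> None:
    """Stand-in for the e-mail/SMS alert to academic and administrative staff."""
    recipients = ["academic staff", "administrative staff"] + (["student"] if include_student else [])
    print(f"ALERT for {record.student_id} -> {', '.join(recipients)}: {'; '.join(breaches)}")

for record in [StudentRecord("S001", 0.72, 20), StudentRecord("S002", 0.95, 3)]:
    breaches = check_tolerances(record)
    if breaches:
        send_alert(record, breaches, include_student=True)
```

The same pattern would generalize to grade or engagement tolerances; the value of SEAtS's version would lie in how these rules are configured for each institution.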

Embedded Theories and Reality

SEAtS states that "Research has shown that engagement and attendance monitoring identifies [sic]
students most in need of support and enables university staff to intervene to improve the chances of
a student's successful course completion" (SEAtS Software, 2019). However, no direct evidence of
any theory has been provided, and no supporting literature is available on their website or in any
supporting publications.

Comment

SEAtS software perpetuates the dominant mindset of current learning applications by focusing on
performance management, specifically the ability to improve or increase positive behaviours in
students. It achieves this through the usual means of harvesting data from the current educational
system and presenting it to analysis clients. Moving through the framework, we see that SEAtS
strategically piggybacks on the existing LMS, SIS, and VLEs of the educational institution it partners
with. This is strategic in that responsibility for the data is shared and owned by the school and the
pre-existing third parties it works with. Depending on individual school goals and objectives, which in
this scenario is performance management, SEAtS suggests which data it will need to collect and store
in its data repositories.

One component of the SEAtS software that separates it from traditional business intelligence is the
use of assisted machine learning. Initially, the SEAtS software will only serve to provide past, present,
and future information in the form of reporting, alerts, and extrapolation. However, over time and
once historic data sets have been built, SEAtS can begin to build models that can be trained to
create refined predictive profiles of potential 'at risk' students, providing some modelling and
recommendation insights.
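
To make concrete what training models on historic data to create predictive 'at risk' profiles could look like, below is a minimal Python sketch of a supervised classifier. Logistic regression via scikit-learn is used purely for illustration; SEAtS does not disclose which methods it actually employs, and the features, labels, and toy data are assumptions.

```python
# Minimal sketch (assumption) of supervised learning on historic data to flag
# potentially 'at risk' students. Features, labels, toy data, and the choice of
# logistic regression are illustrative only, not SEAtS's documented method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historic records: [attendance_rate, average_grade, weekly_logins]
X = np.array([
    [0.95, 0.82, 5], [0.60, 0.55, 1], [0.88, 0.74, 4], [0.45, 0.40, 0],
    [0.92, 0.90, 6], [0.70, 0.58, 2], [0.55, 0.48, 1], [0.98, 0.85, 7],
    [0.65, 0.52, 2], [0.80, 0.68, 3],
])
# 1 = student later withdrew or failed (at risk), 0 = completed successfully
y = np.array([0, 1, 0, 1, 0, 1, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Estimate the risk for a current student from their present data
current_student = np.array([[0.66, 0.50, 1]])
risk = model.predict_proba(current_student)[0, 1]
print(f"Estimated at-risk probability: {risk:.2f}")
print(f"Hold-out accuracy (tiny toy sample): {model.score(X_test, y_test):.2f}")
```

In practice, such a model would be trained on several years of de-identified records rather than the toy sample shown here.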

This company makes bold claims but provides few specifics on how its software works. It repeatedly
highlights the different types of educational data it utilizes and how the LA could support increased
student performance, but fails to address the following:

• Differentiation between the software's use for learning analytics versus academic analytics
• Connection to learning theories
• Input data (the focus is heavily on outcome data)
• Examples or case studies of metric data used
• The types of methods used for predictive analytics
• Discussion of privacy or the ethical use of data
• Reviews or testimonials from current or past clients (none could be found on their website or
in a general web search)

RECOMMENDATION
I would not recommend this tool for myself or colleagues because there are too many
unanswered questions. The key hesitations are the lack of grounding in learning theory and not
knowing what methods of predictive analytics are being used (simple? linear? logistic? etc.).
The SEAtS Software platform is promising and has an appealing UI/UX design. It could also
transform current teaching and administrative practices, should its claims prove sound. I would
recommend further investigation and perhaps a trial of the software before purchasing.

REFERENCES

Cooper, A. (2012). A framework of characteristics for analytics. CETIS Analytics Series, 1(7).
Bolton: JISC CETIS.

Ifenthaler, D. (2016). Recorded for the MOOC Analytics for the Classroom Teacher offered by
Curtin University, Australia [YouTube, 3 mins.].

SEAtS Software | Student Success Platform. (n.d.). Retrieved February 22, 2019, from
https://www.seatssoftware.com/

Scheffel, M., Drachsler, H., Stoyanov, S., & Specht, M. (2014). Quality indicators for learning
analytics. Journal of Educational Technology & Society, 17(4), 117-132.

Sclater, N. (2017). Metrics and predictive modelling. In Learning analytics explained (Chapter 9,
pp. 88-98). New York, NY: Taylor & Francis.

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American
Behavioral Scientist, 57(10), 1510-1529.
