
---------------------------------------------------------------------------------------------------------------------
Introductory Notes to the Reader: This proposal (in an earlier draft form) was submitted
to an open innovation challenge seeking "Assistive Technologies for All" (Fall 2014), with a
specific focus on assisting visually and physically (motor) impaired students (grade school
through high school or beyond) needing to take an academic assessment exam. The
proposal was not selected as a winning proposal (possibly because of its
recommendations for using existing {proprietary} and open source technologies).
However, this author believes that the following proposal (a revised draft, but substantively
the same) serves as a valuable and useful overview of relevant technologies and,
additionally, contains various innovative ideas and alternative suggestions for assistive
technologies (in the context noted above, and for the exam design itself), while also
proposing an "omni-inter-accommodative" framework for approaching these technical
challenges.
Lastly, I have included the complete and original End Notes (11 pages in all), which provide
extensive references, citations, and resources {URLs for research/development labs, etc.}
specific to the proposed technologies, development models, and official (WCAG) user
guidelines.
I hope that the reader will find some, or all, of this proposal of value.
Michael A. Ricciardi (April, 2015)
---------------------------------------------------------------------------------------------------------------------

Title: Proposal for Assistive Technology-Enhanced Assessment Taking by Visual and
Motor-Impaired Examinees -- An Omni-Inter-Accommodative Approach
I. The Primary Assistive Technologies (AT) / Exam Augmentations -- Details and Notes:
Text-to-speech (TTS) translation capability (possibly including Speech-to-Text (STT)) for all
textual exam instructions, questions, and listed answers/objects [see: Section VII., End Notes,
pages 8-9 for more info]
A proposed text-to-voice (TTV, or TTS) + speech-to-text (STT; see Note, below) program
would commence with a vocal cue by the examinee (following a vocal prompt from the program;
time-keeping, if required, would commence once the examinee indicates his/her readiness to
begin) and would include prompt questions: asking the examinee if he/she wants to begin, continue,
repeat, go back, change answer, etc. All possible answers for each assessment question
would be identified. If a question requires a "fill in the blank" type answer, then speech recognition
(SR), or speech-to-text (STT), with clarification prompts, will apply [see: "voice mouse" later in
this section]. This AT solution requires hearing and the ability to speak/respond vocally (if the
examinee is both deaf and blind, then this solution would not apply) [see notes, below].
Note: The use of STT assistive technology (in conjunction with TTS) will require that the
assessment exam program both recognize the speech inputs and also accurately
integrate/insert said speech inputs into their proper/correct place (exam question/response
space). From experience, this type of integration (with an existing UI) works well with
Dragon Speech Recognition software [see: End Notes, page 9; see also: "voice mouse"
in this section]. Commands like "grab object" and "move object" may require new coding.
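By way of illustration only, the following minimal Python sketch (the command phrases and function names are this author's assumptions, not part of any existing product) shows how recognized speech, once transcribed by an SR engine, could be routed to the kinds of exam-program actions listed above:

    # Minimal sketch (hypothetical names): routing a recognized transcript
    # to exam actions. Assumes an external speech recognizer has already
    # produced plain text.
    EXAM_COMMANDS = {
        "begin": "start_exam",
        "continue": "next_item",
        "repeat": "replay_prompt",
        "go back": "previous_item",
        "change answer": "edit_answer",
        "grab object": "select_object",
        "move object": "drag_object",
    }

    def dispatch(transcript: str) -> str:
        """Match the longest known command phrase at the start of a transcript."""
        text = transcript.lower().strip()
        for phrase in sorted(EXAM_COMMANDS, key=len, reverse=True):
            if text.startswith(phrase):
                return EXAM_COMMANDS[phrase]
        return "clarify"  # fall back to a clarification prompt, as proposed above

    print(dispatch("Grab object number two"))  # -> select_object

Any phrase the dispatcher cannot match falls through to a clarification prompt, consistent with the "clarification prompts" proposed above.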
As for drawing components: test designers should assess the degree of difficulty in the required
drawing (simple circles, squares, shapes, or line plots, etc.). For difficult drawing tasks, a program
would need to be developed that can assist in the drawing tasks required by an exam such that
the figure being drawn can be "smoothed out" or "filled in" (this applies for those with hand
shaking or tremors) based upon a "most likely" shape/pattern parameter defined by the program
(parameters would be based upon the test items displayed). This may require specific software
development or augmentation to existing (proprietary or open source) software.
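As a rough illustration of the "most likely shape" idea (assuming, for simplicity, that an item expects either a circle or a straight line), a smoothing routine might replace a tremor-affected stroke with whichever idealized figure fits the drawn points best:

    # Sketch under stated assumptions: the item expects a circle or a line,
    # and the stroke is a list of (x, y) points captured from the examinee.
    import math

    def snap_stroke(points):
        """Return ('circle', center, radius) or ('line', start, end), whichever fits better."""
        n = len(points)
        cx = sum(x for x, _ in points) / n
        cy = sum(y for _, y in points) / n
        radii = [math.hypot(x - cx, y - cy) for x, y in points]
        mean_r = sum(radii) / n
        circle_err = sum((r - mean_r) ** 2 for r in radii) / n
        # Line fit: squared distance to the segment joining first and last points.
        (x0, y0), (x1, y1) = points[0], points[-1]
        seg = math.hypot(x1 - x0, y1 - y0) or 1.0
        line_err = sum(
            (abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / seg) ** 2
            for x, y in points
        ) / n
        if circle_err < line_err:
            return ("circle", (cx, cy), mean_r)
        return ("line", (x0, y0), (x1, y1))

A production version would carry a template per exam item (square, pentagon, plotted line, etc.), but the fit-and-replace principle is the same.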
Important Note: It should be determined whether the true assessment here is technical
drawing skill, or rather, the ability to identify and render the correct shape or line plot
within reasonable parameters that can be readily monitored and assessed by the computer
(exam) program. [see above, also: section III., Note, page 4]
Shapes can then be selected vocally (again with prompts from the software). For physically-impaired persons, a "voice mouse" can be used [see: End Notes, QPointer, page 10 for link and
detailed article excerpt; also: see Note, below] to guide the cursor to the correct choice (be it a
multiple choice item, a digital display "Keyboard" for typing (via voice selection) of short or
long written answers, or selection of an appropriate shape, possibly even drawing a line plot).
This technology would be especially useful for any "hot spot" (identification) components of the
putative assessment/exam.
Note: the current voice mouse technology is based on human screen content cognition
insight and artificial intelligence. The tech requires adaptation/learning, as it is based
upon an "average" user profile. This makes use of the tech potentially problematic for
persons with speech impairments in addition to motor impairments. However, any such
"average" profile could/should be customizable (e.g., either through user training or
through open source programming). [see: examinee-specific technology, section III.]
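The hint-tag selection flow described for QPointer-style voice mice in the End Notes excerpt can be sketched very simply (names and coordinates below are purely illustrative): visible screen elements get numbered tags, the user speaks a tag, and the cursor jumps to that element:

    # Illustrative sketch of hint-tag voice selection (hypothetical names).
    def assign_tags(elements):
        """elements: list of (label, x, y) tuples for on-screen targets."""
        return {str(i + 1): (label, x, y) for i, (label, x, y) in enumerate(elements)}

    def jump_to(tags, spoken_tag):
        label, x, y = tags[spoken_tag.strip()]
        print(f"cursor -> ({x}, {y}) on '{label}'")
        return (x, y)

    tags = assign_tags([("Answer A", 120, 300), ("Answer B", 120, 340)])
    jump_to(tags, "2")  # user says "two" after the tags are displayed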
Clearly, the greatest challenge to providing access to all (as defined here) is for those persons
who are both sight and motor-impaired; haptic or telemetric technologies will be inadequate for
severely motor-impaired persons, as well as (possibly) visually-impaired persons. Also, with
persons who have advancing ALS or CP, voice-assist technologies (voice mouse, STT) may be
of limited utility but perhaps could be trained to recognize individual (highly altered/impaired)
speech patterns, just as personal associates (family, friends, therapists) learn to understand the
speech of severely speech-impaired persons. This will likely require a considerable degree of
practice/training prior to the taking of any formal exam. Speech Recognition (SR) technology
[see: Dragon SR software, the industry standard] has improved dramatically in recent years and
there is every indication that this trend will continue.
Note: many persons with ALS or CP use speech-assisted (or voice-assisted)
communication (i.e., a speech synthesis word/phrase keyboard that permits selection via
touch). In the case of taking an assessment exam, the output of such a keyboard device
would need to be inputted to the SR program (assuming that a "Public Address" type
modality may be either distracting to others {in the exam classroom} and/or not
adequately recognized by the SR program). However, as the voice here is machine-generated, it may be that an artificial voice communicator --
inputted/wired to the exam program -- is better recognized by the SR program than any
spoken (unassisted) voice. This would need to be tested/verified.
Alternatively, or additionally, a computer-mounted (or on-board) camera integrated with eye-tracking software (for those not visually impaired), or eye-tracking glasses (worn by the
examinee [see: Tobii.com, End Notes, page 11]), could support the motor-impaired examinee's
assessment testing. It is entirely possible that eye-tracking/control of a cursor could be used as
the primary assisting technology here, but practice/familiarity will still be needed, and the note
on drawing/spatial rendering (for a computer-based exam involving graphs, plots, shapes, etc.)
will also apply here [re: Important Note, page 1].
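Two refinements commonly needed before raw gaze data can drive a cursor are jitter smoothing and "dwell-to-click" conversion. The following sketch assumes a tracker streaming (x, y) pixel samples at roughly 60 Hz; the smoothing factor and dwell thresholds are illustrative, not any vendor's values:

    # Sketch: smooth gaze jitter, then treat a steady dwell as a "click".
    def gaze_cursor(samples, alpha=0.2, dwell_radius=30, dwell_count=45):
        sx, sy = samples[0]
        anchor, held = (sx, sy), 0
        for x, y in samples[1:]:
            sx = alpha * x + (1 - alpha) * sx      # exponential smoothing
            sy = alpha * y + (1 - alpha) * sy
            ax, ay = anchor
            if (sx - ax) ** 2 + (sy - ay) ** 2 <= dwell_radius ** 2:
                held += 1
                if held == dwell_count:            # ~0.75 s of steady gaze at 60 Hz
                    yield ("click", int(sx), int(sy))
                    held = 0
            else:
                anchor, held = (sx, sy), 0
            yield ("move", int(sx), int(sy))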
To this Author's knowledge, no comparisons of ease of use between these two proposed
technologies have been conducted, especially with sight and/or motor-impaired persons. Thus,
research and development towards the Seeker's goal may need to include side-by-side subject
testing and comparison of these ATs [see also: End Notes, 6-step Innovation Design Model,
page 15].
This type of technological assistance may be more automatic (i.e., voice selecting from a
programmatic, voiced list/menu of options) for visually-impaired persons, whereas a motor-impaired (only) person could have a more robust assist tech that takes commands (like "draw a
simple five-sided object", in response to a polygon question), and/or works in conjunction with
eye-tracking (where the eye movement moves the cursor, when in "drawing mode").
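For example (an illustrative sketch, not any vendor's API), a voiced command such as "draw a simple five-sided object" could be satisfied by generating the vertices of a regular polygon and handing them to the exam canvas:

    # Sketch: turn a voiced side count into an idealized regular polygon.
    import math

    def regular_polygon(sides, cx=0.0, cy=0.0, radius=100.0):
        step = 2 * math.pi / sides
        return [(cx + radius * math.cos(i * step - math.pi / 2),
                 cy + radius * math.sin(i * step - math.pi / 2))
                for i in range(sides)]

    pentagon = regular_polygon(5)  # five vertices, ready for the exam canvas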
II. Alternative/Augmenting Technologies:
Cursor Control by Motor Cortex Monitoring
There is also the possibility of using a neural interface system (NIS), or "brain cap" -- a tight-fitting skull cap outfitted with embedded electrodes that detect (pick up) neural activity
signals from the examinee's motor cortex [see: Miguel Nicolelis et al.; see also: BrainGate,
NeuroSky; see also: Section VII., End Notes, pages 12-14]. By visualizing the appropriate
cursor movement (i.e., imagining it being moved by hand/joystick in a given direction), the
examinee can maneuver the cursor with his/her thoughts to select the appropriate
choice/answer. It may even be possible (in the near future) for the examinee to visualize drawing
a geometric shape, with the program trained for pattern recognition and filling in the desired
shape [see earlier Important Note]; again, it is assumed here that precise technical drawing skill is
NOT the factor being assessed, but identification (and a fair rendering) of the correct shape [see:
Section III].
Neural interface head gear such as this is a major advancement over earlier NIS experiments
that required a direct brain-computer connection (a brain-implanted chip hard-wired to a
computer). However, the technology and signal processing algorithms that enable such NISs to
work are still being refined. The efficacy of this type of NIS is inconsistent from person to
person (the caps need to be fitted tightly on a person's skull; hair thickness/volume may interfere
with signal detection), and a user must practice considerably with such a device to get optimal
performance. That said, refinements in (neural) signal filtering, integrating, and rectifying are
advancing [see: https://www.delphion.com/details?pn=US24249302A1&wlref=17926854], and
we can expect far greater control over device utilization (and accuracy) via improvements in this
neuro-tech in the near future.
Note: see also: http://braingate2.org/communication.asp (locked-in syndrome)
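For the interested reader, the core decoding idea behind many NIS cursor controllers can be sketched very simply: a weight matrix, learned during a calibration/practice phase, maps per-electrode firing rates to a 2-D cursor velocity. Real systems add filtering (e.g., Kalman filters) and per-user recalibration; everything named below is illustrative:

    # Toy sketch of linear neural decoding (illustrative names and values).
    def decode_velocity(rates, weights):
        """rates: list of N firing rates; weights: 2 rows of N weights each."""
        vx = sum(w * r for w, r in zip(weights[0], rates))
        vy = sum(w * r for w, r in zip(weights[1], rates))
        return vx, vy

    def step_cursor(pos, rates, weights, dt=0.05):
        """Integrate decoded velocity over one 50 ms control tick."""
        vx, vy = decode_velocity(rates, weights)
        return (pos[0] + vx * dt, pos[1] + vy * dt)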
However, use of an NIS device may not be effective or workable for sight-impaired persons;
visualization of the correct arm/hand movement (e.g., in the case of drawing a geometric
shape) may require that the visually-impaired (blind) person have, at one time in his/her life,
been capable of sight/seeing, such that visual memory of the motor skill is preserved, and can be
effectively retrieved. To my knowledge, no testing of this technology has been conducted on
blind-since-birth individuals. This factor may be a limiting one in this latter case.
Note: This adapting of differing technologies to the user's needs and capabilities
represents a more accommodating approach to assessment taking, which is discussed in
the next section [see also: section III., Note, pages 4-5 for an additional AT
augmentation].
III. Rationales: An Omni-Inter-Accommodative, Examinee-Specific Approach

Clearly, there is no "one size fits all" assistive technological solution here.
The primary assistive technology here -- the audio-enhancement (TTS) modality -- would ideally
be part of a suite of assistive technology (AT) tools that will be necessary for our examinee-specific approach to test-taking. By this it is meant that the test-taking/examination program
should be as "omni-inter-accommodative" (this is a Bucky Fuller term) as possible, so that, upon
commencement of the exam, a tech guide or test administrator can select (if needed) the
appropriate examinee (technology) mode for optimal user-program interaction and test-taking.
A hybrid tool kit (i.e., a suite of software modules) that permits smooth switching between
assistive modes/technologies (e.g., TTS, eye-movement tracking, voice mouse and/or
voice recognition, and even motor cortex monitoring), and/or the combining of AT modes, would
seem to be the optimal approach. If the choice is made to fully integrate all of the noted ATs,
designing assessment/testing programs to do this adaptive switching will of course add to the
cost of the assistive technology R & D. On the other hand, if a student's needs (due to an
impairment) are known ahead of time (and we should assume that they are), the appropriate AT
(one the student is familiar with using) should be selected/set-up/installed prior to the
assessment. Any educational program that enables vision or motor-impaired students to take the
same assessments as non-impaired students must include introduction to, and training (practice)
with, these ATs. This is key to the "accommodative" part of omni-inter-accommodative.
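One way to structure such a hybrid tool kit (a sketch only; the class and method names are this author's assumptions) is to place each AT behind a common input-mode interface, so that an administrator can select, or combine, modes per examinee at exam start:

    # Sketch of the "omni-inter-accommodative" suite as pluggable input modes.
    from abc import ABC, abstractmethod

    class InputMode(ABC):
        @abstractmethod
        def next_event(self):
            """Return the examinee's next action (e.g., ('select', 'B'))."""

    class TTSVoiceMode(InputMode):
        def next_event(self):
            return ("select", "B")  # stub: would wrap TTS prompts + speech input

    class EyeTrackingMode(InputMode):
        def next_event(self):
            return ("move", 120, 340)  # stub: would wrap the gaze pipeline

    MODES = {"voice": TTSVoiceMode, "gaze": EyeTrackingMode}

    def configure(mode_names):
        """Administrator picks one or more modes; events are merged downstream."""
        return [MODES[name]() for name in mode_names]

Because every mode emits the same event vocabulary, the exam program itself never needs to know which AT produced an answer, which is precisely what makes the switching "smooth".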
To address a few more details/issues with any use of TTS technology, as noted previously:
many assessment tests will include non-textual content (images, shapes, charts, etc.). Such non-textual content will still need to be vocalized for the user (if sight-impaired), or possibly
converted to braille [see: Section V.; WCAG compliance]. Although conversion of the full exam
to braille -- and the provisioning of a braille keyboard -- is possible, the student will still,
presumably, be taking the assessment via a computer interface, and thus a braille UI display is
probably not practical or affordable (but a braille book of the exam could be provided). So,
barring the full availability/integration of braille functionality (i.e., both in the assessment/exam
output {instructions, questions, etc.} and student/examinee input), full TTS conversion of the
assessment exam will likely still be needed. This may prove to be a less cumbersome approach.
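As a sketch of what full TTS conversion could look like (assuming each exam item carries its text plus alt-text descriptions for any non-text content; the same strings would feed a braille path), the following uses pyttsx3 as one off-the-shelf engine, though any of the TTS products listed in the End Notes could substitute:

    # Sketch: vocalize an entire exam, item by item, including alt-text
    # descriptions of charts/shapes. pyttsx3 is one available engine.
    import pyttsx3

    def vocalize_exam(items):
        engine = pyttsx3.init()
        for item in items:
            engine.say(item["text"])
            for alt in item.get("alt_texts", []):   # charts, shapes, images
                engine.say("Description: " + alt)
        engine.runAndWait()

    vocalize_exam([{"text": "Question 1: Which shape has five sides?",
                    "alt_texts": ["A pentagon is shown beside option B."]}])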
Note: A sight-impaired examinee could be aided greatly through the integration of a
tablet or digital pad to be used solely for the spatial/drawing (shapes, plots, graphs, etc.)
components of the assessment exam. A tablet (e.g., an iPad) offers a relatively large field
(isolated from a computer screen, and easier to handle and utilize/draw on*) for such
drawing/rendering and can serve as a dedicated "work space" (input) to the exam
program. Thus, combining braille functionality and a separate drawing tablet (along with
the TTS module) may provide the complete, adaptive AT solution for a sight-impaired
examinee. This augmentation may also be useful for motor-impaired examinees [see also:
section II, Alternative and Augmenting ATs].
*A tablet app could be designed/integrated that recognizes key features of exam
components (required shapes, graphs, plots) and alerts the user (e.g., via a short
beep, or vocalized acceptance) that s/he has drawn the shape correctly enough to
be recognized and acceptable for the assessment's purposes. This would
essentially be an Artificial Intelligence (AI) augmentation.
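The "correctly enough" check imagined in the footnote above might work as follows (a sketch under stated assumptions; the tolerance value is arbitrary): sample the target outline densely, then accept the stroke if the drawn points stay, on average, within a tolerance band of that outline:

    # Sketch: accept a drawn stroke if it hugs the target outline closely enough.
    import math

    def mean_outline_error(stroke, outline):
        """Average distance from each drawn point to the nearest outline sample."""
        total = 0.0
        for x, y in stroke:
            total += min(math.hypot(x - ox, y - oy) for ox, oy in outline)
        return total / len(stroke)

    def accept(stroke, outline, tolerance=12.0):
        ok = mean_outline_error(stroke, outline) <= tolerance
        if ok:
            print("\a")  # the short acceptance "beep" proposed above
        return ok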
Also, in regard to the previous Important Note [page 1], it is important to make the distinction
between knowledge (identification of correct items) and technical skill (the physical drawing of a
shape, plot, or bar graph). Given that most technical (professional) usages/renderings of
geometric objects, line plots, graphs, etc. are already accomplished through automatic programs
(e.g., data analytics) or templates, it is not inappropriate (or an unfair advantage) to permit vocal
selection (e.g., "grab object #2") in this manner by those with visual and/or motor impairments.
In the case of a dual-impaired examinee, it would seem that one basic assistive mode is
absolutely mandatory: complete text-to-voice translation of all examination material. This may
require an adjunct or "plug-in" program (for TTS conversion) or a customized software version
of the test [see: End Notes, pages 8-9]. In the latter case, no altering or redesigning of the
assessment software may be necessary, other than possibly enabling optical character
recognition (OCR) in the core software. At the very least, this full TTS functionality should be
the AT default mode of the assessment exam.
Further, no technology or suite of technologies (an "integrated package", as it were) can be purely
stand-alone in this context. It is presumed that many sight and motor-impaired students will
need some additional (temporary) assistance from actual persons ("user agents" [see: WCAG 2.0
guidelines, pages 17-18], teaching assistants, tech guides, etc.). For example, a tech guide or test
administrator may need to verify that the AT (e.g., TTS, voice mouse, eye-tracking, or "brain
cap") program(s) is/are working correctly, or an examinee may need to be assisted with fitting of
earphones (if required, so as not to disturb other examinees in the same area) or eye-tracking
glasses. Older students may be able to take their exams remotely (i.e., not in a formal classroom
setting). However, if a person has ALS or CP, or is completely blind, remote assessment taking
may not be practical or possible (barring an approved examinee assistant, working with the
student, from a home or remote location) [see: next section, IV.].
IV. Secure Assessment-Taking:
Regarding the issue of security, this would be a function of the assessment computer program
(logging-in protocols, security code/security questions/responses, use of CAPTCHAs, etc.). If an
examinee is taking the assessment at his/her home, then secure access to an exam "work space"
site (e.g., an exam-linked "ning") may need to be established. This could be modeled after the
educurious.com (http://educurious.org/about/) program's remote tutorial website (which, for ease
of use, would need to be simplified for the sole purpose of secure exam taking). This program
and web space were designed primarily for high school students, but could also possibly serve the
needs of younger students (though probably not much younger than the 4th grade level). Younger
students may need to have in-class guidance and monitoring ("user agents", as defined in WCAG
2.0); a remote exam-taking option may not be desirable or practical with younger students,
especially if sight/motor-impaired.
It is possible to "lock down" the exam-associated work/test space (if Internet-based, as with a
"ning") once the examinee has logged on. Once logged on, the system prevents any new
windows from appearing and/or the examinee from leaving the exam work space. If the exam is
not Internet-based (clearly the most practical way to provide security), then blocking access to
the Internet can be readily accomplished (as with school library or classroom computers, etc.). In
either event, active computer usage monitoring (by the sys admin, etc.), as is currently conducted
during FRE exams, should provide the necessary security (which may be more necessary for
remote exam taking).
Also, assessment exam-taking at home (or other remote location away from a formal school or
classroom scenario) could be made more secure (e.g., prevention of cheating or any non-tech-related assistance) through camera monitoring of the examinee. A laptop-integrated camera or
mounted web-cam (such as is typically used in VOIP tech) could be a requirement for any such
remote assessments and could be synced with the commencement of the exam (i.e., the test only
starts when the camera is on and/or the registered site-user is in the camera frame).
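A minimal sketch of this camera sync follows (assuming OpenCV and its bundled Haar face model; verifying the registered user's specific identity would require a face recognition step beyond the simple face detection shown here):

    # Sketch: gate exam start on "camera on and a face in frame".
    import cv2

    def camera_ready(max_frames=300):
        cam = cv2.VideoCapture(0)
        face_model = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        try:
            for _ in range(max_frames):
                ok, frame = cam.read()
                if not ok:
                    continue
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if len(face_model.detectMultiScale(gray, 1.1, 5)) > 0:
                    return True  # someone is in frame; exam may commence
            return False
        finally:
            cam.release()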
An additional security measure could be provided if any assistive technology (AT) used has its own
registered ID code (registered to a specific user, who must log on with the registered AT item
code) and/or which activates only when that user is wearing or utilizing the designated AT (the
device communicates with the assessment program). However, any AT device must allow the
user to either stop (to take a break), or pause, and then resume use without a lengthy re-login
phase, especially where assessments are time-sensitive. In this regard, we can assume that
assessment taking by sight/motor-impaired persons will take longer than for non-impaired persons
taking the same tests. Thus, extra time will need to be allowed for persons using AT during
exams.
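The registered-AT pause/resume idea could be sketched as follows (all names are illustrative): the session binds user and device IDs once at login, and pausing simply freezes the exam clock so that resuming requires no re-login:

    # Sketch: bind user + AT device once; pause/resume without re-login.
    import time

    class ATSession:
        def __init__(self, user_id, device_id, registry):
            if registry.get(user_id) != device_id:
                raise PermissionError("AT device is not registered to this examinee")
            self.started = time.monotonic()
            self.paused_at = None
            self.paused_total = 0.0

        def pause(self):
            self.paused_at = time.monotonic()

        def resume(self):
            self.paused_total += time.monotonic() - self.paused_at
            self.paused_at = None  # device binding still holds; no re-login

        def elapsed(self):
            now = self.paused_at if self.paused_at is not None else time.monotonic()
            return now - self.started - self.paused_total

    session = ATSession("student42", "AT-0099", {"student42": "AT-0099"})

The elapsed() clock stops while paused, which is one simple way to grant the extra time noted above without penalizing AT users for breaks.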
V. WCAG 2.0 AA Guidelines Compliance:
This Solver identifies two main criteria/items from the official WCAG 2.0 guidelines that are of
particular relevance to this proposal. However, this should not be taken as a definitive WCAG
2.0 compliance assessment. A full WCAG 2.0 compliance assessment would still be needed
upon completion/implementation/testing of the final AT "suite/package" [see: the provided link
in the End Notes, pages 17-18, for the full Assistive Technology criteria list link/page]:
1] Text Alternatives (http://www.w3.org/WAI/WCAG20/quickref/#text-equiv):
Guideline 1.1 Provide text alternatives for any non-text content so that it can be changed
into other forms people need, such as large print, braille, speech [Author note: TTS, STT,
etc.], symbols or simpler language. Understanding Guideline 1.1
[See: End Notes, page 18 for more complete details from the WCAG guidelines]
2] Compatible:
Guideline 4.1 Maximize compatibility with current and future user agents, including
assistive technologies.

Author's Note: this requirement would seem to be compatible with the Solver's
omni-inter-accommodative (hybrid) approach to AT [see: Section III].
Noted Advisory Techniques for Guideline 4.1:
Not displaying content that relies on technologies that are not accessibility-supported
when the technology is turned off or not supported. [note: this would seem obvious]
[See: End Notes, page 18]
Many of the listed guidelines in the WCAG are obvious, and several criteria possess some overlap. Most of these criteria have been left out of this section, as they list requirements/criteria that
would obviously be part of the AT design and implementation stages [see also: 6-step Innovation
Design Model, pages 15-16]. The two noted items (above) would seem to be those of particular
focus/relevance for the assistive technologies proposed here.
VI. Author Statement:
I have considerable professional experience with audio-visual tech (operation/installation/design)
in the field of Arts & Entertainment, including interactive technology (collaborative) design
experience (utilizing: speech recognition, virtual reality, 3D CGI), but am not a software
engineer/coder. Additionally, this Author has some 20 years of experience as an educator (in
private, public, and alternative school programs), including 7 years educating developmentally
disabled adults. I would be willing to help in the planning, design, testing, and implementation
stages of any assistive technology project that emerges/results from this challenge.
VII. Sources / Resources:
TTS Technology -- see: End Notes, pages 8-9
SR/STT Technology -- see: End Notes, page 9
Voice Mouse -- see: End Notes, pages 9-11
Eye-Tracking Technology -- see: End Notes, page 11
Brain-Machine Interface / NIS Technology -- see: End Notes, pages 12-14 (see also:
affiliated sites, page 13)
(General) Assistive Technology Development Resource(s) -- see: End Notes, pages 14-15
Also: Improving the Quality of Innovative Item Types: Four Tasks for Design and
Development (PDF guide) -- see: End Notes, pages 15-17

END NOTES
Text to Speech (TTS) technology:
There are currently many TTS companies offering TTS software/technology. This Solver has personally
used only the original AT&T Natural Voices version. However, this proposal is not intended to serve as
a product recommendation. The Seeker will need to sample and compare these technologies separately.
Here are several commercial providers of TTS technology (most providing life-like speech synthesis):

CereProc Text-to-Speech
https://www.cereproc.com/
AT&T Natural Voices Text-to-Speech Demo
www2.research.att.com/~ttsweb/tts/demo.php
Note: AT&T Labs has an active R&D program in TTS technology (also: speech-to-text)
TTS SDK | Speech Recognition (ASR)
www.ispeech.org/
Note: Talkz features Voice Cloning technology powered by iSpeech. iSpeech Voice
Cloning is capable of automatically creating a text to speech clone from any [digital text
source].
E-learning services: http://www.ispeech.org/instant.e-learning.text.to.speech
Contact: http://www.ispeech.org/commercial/contact
Linguatec - Text-to-speech technology
www.linguatec.net/products/tts/information/technology
Acapela group - Voice synthesis - Text to Speech | voice ...
www.acapela-group.com/
IVONA Text-to-speech
www.ivona.com/en/about-us/text-to-speech/
Oddcast - Text to Speech

www.oddcast.com/technologies/tts/
Free Text to Speech
www.naturalreaders.com/

Resources:
Festival - Centre for Speech Technology Research
www.cstr.ed.ac.uk/projects/festival/
University of Edinburgh - Festival is a general multi-lingual speech synthesis system developed
at CSTR. It offers a full text to speech system with various APIs, as well as an environment for ...
Using Text-to-Speech Technology Resource Guide - CAST
www.cast.org/system/galleries/download/pdResources/tts.doc

Speech Recognition Software (for STT integration):


Dragon Speech Recognition Software (Nuance) http://www.nuance.com/dragon/index.htm
Dragon software support: http://www.nuance.com/dragon/index.htm
Dragon SDK Client Support:
http://www.nuance.com/naturallyspeaking/products/sdk/sdk_client.asp
Dragon Webinar (on-demand) for Education: Dragon Speech Recognition for
Education
In this webinar, you'll learn how Dragon speech recognition software is used by students
of all abilities to improve core reading and writing skills, to reduce the stress and pressure
associated with homework and reports, and to help achieve their full potential. View the
webinar to learn more. http://www.nuance.com/dragon/dragon-naturallyspeakingwebinars/WebinarEducationRecording/index.htm
----------------------------------------------------------------------------------------------------------------
Voice Mouse technology -- Cursor Control via Voice (an alternative to the above?):
http://www.israel21c.org/technology/voice-mouse-enables-hands-free-computer-navigation/
an Israeli company has developed new technology that will allow anyone to operate a
computer, even those who are not able to point and click because they suffer from paralysis,
Parkinson's disease or other physical limitations.
Israel-based Commodio, Inc. has developed the world's first "Voice Mouse". In doing so,
Commodio, a small start-up based in Kfar Sava, has moved interaction with the computer from
the hands to the mouth. Using their product, called QPointer HandsFree, the screen responds to
the human voice in every way. Voice commands are responsible for every command
operation. No hand movements are necessary to navigate the Internet, write and send e-mail, create and edit documents, point directly at any object on a computer screen, emulate
the mouse by performing drag-and-drop operations, and activate keyboard keys and
shortcuts.
"QPointer HandsFree allows you to touch any object on a computer screen by voice, similarly
to the way a person uses a mouse," Ramy Metzger, President and CEO of Commodio, told
Globes. "With QPointer HandsFree users can turn their computers and other display-based
devices into voice-activated devices."
The user points at a screen object by saying the name of the part of the screen they wish to
use, such as words or toolbar buttons. "Hint tags" are then displayed next to all screen
elements of that part of the screen. The user speaks the name of the tag of the requested
screen object and the cursor jumps to the right place. Any mouse command can be
emulated with the user saying that command (e.g., "double-click").
"We realized that pointing and writing devices are inconvenient to use. We use them now
because there's nothing else. We're just used to it," says Commodio cofounder and CTO Dr.
Leonid Brailovsky. "Even normal persons working for hours with a mouse get wrist pains at
some point. At first, we considered developing a product that would eliminate using a mouse, but
then we realized that our product also enables us to get rid of the keyboard, and we continued in
that direction."
QPointer HandsFree is based on Commodio's proprietary technology that analyses screen
content, combined with a speech recognition engine provided by Microsoft. Commodio's
technology is based on human screen content cognition insight and artificial intelligence.
"We didn't develop the speech engine itself," Brailovsky notes. "Microsoft, which did, is letting
us use it for free for our developments in the field" (Microsoft markets the engine through its
Office XP software, B.G.).
The system requires some adaptation, since it is designed for an "average" profile. Users
present their profiles by dictating text. The system can store a number of profiles,
according to the number of users.
"Microsoft has chosen Commodio as its vendor of choice for voice operation. We're
currently cooperating with Microsoft, and we may expand our cooperation in the future. For
now, we're marketing the product in the market for computer access products -- Assistive
Technologies. We're the only ones so far providing access to all computer software through
voice commands."
"There are products that require full integration with every single program, which
significantly reduces the possibilities for supporting applications. Our product doesn't need
integration, because it's based on what physically appears on the screen, regardless of what
software is being operated. That gives a wide range of action, as you say what you see, without
relying on complicated commands."
According to Brailovsky, QPointer HandsFree's potential market extends far beyond those who
are unable to use a conventional computer operating system.
"I don't have to educate an entire market to throw away their computer mice and keyboards,
since I'm appealing to people who can't use mice and keyboards, and therefore can adapt much
more quickly. For example, we appeal to amputees, or those suffering from palsy, who find
it difficult to operate a mouse accurately. We also target early adopters, who like trying new
things. After all, it's very sexy to surf the Internet without using your hands."
Commodio, Inc. was incorporated in the US in 2000 and currently employs 10 people. Its
marketing and sales office is located in Houston and the R&D center is located in Kfar Sava.
The requirements for operating the Commodio system are quite modest: a Pentium 3 500-megahertz computer, and a microphone supplied together with the system. The product is
sold on the US market for $189.
Note: [resources] Assistive Technologies: Commodio, Inc. (partner vendor for Microsoft)

Eye-tracking technology:

Introducing Tobii Technology http://www.tobii.com/


Tobii works with developers and manufacturers to create mind-blowing user experiences with
computers, games and other devices. With Tobii EyeX we create an intuitive interface that uses
your natural eye movements.
Tobii also helps people with disabilities to communicate and gain more independence. To some,
eye control is the only way to access computers and have a voice.
Eye tracking is used in research as a means for insight into human behavior. Tobii is the
preferred provider of eye tracking solutions to thousands of researchers worldwide.
Other Augmentative Assistive Communication (AAC) technologies (hardware):
http://www.tobii.com/assistive-technology/global/products/hardware/

Brain (motor cortex) Monitoring/Sensing Technology:


Primary Research (original brain-machine interface {BMI}, neuro-prosthetics research) Laboratory of Dr. Miguel Nicolelis / www.nicolelislab.net/

Resource / contact info:


(919) 668-6031
Box 3209 Dept of Neurobiology
Duke University Medical Center
Durham, NC 27710

Neural Interface Systems (NISs) Developed Technologies:


BrainGate (www.braingate.com)
The BrainGate Co. is a privately-held firm focused on the advancement of the BrainGate
Neural Interface System. The Company owns the intellectual property of the BrainGate
system, as well as new technology being developed by the BrainGate company. In addition, the
Company also owns the intellectual property of Cyberkinetics, which it purchased in April 2009.
The goal of the BrainGate Company is to create technology that will allow severely disabled
individuals -- including those with traumatic spinal cord injury and loss of limbs -- to
communicate and control common everyday functions literally through thought.
The BrainGate Company was founded by Chairman Jeffrey M. Stibel and is led by a seasoned
team of entrepreneurs whose goal is to advance "movement through thought" alone. This is
achieved through partnership with leading academic institutions, corporations, and various non-profit and government organizations working on the research, science, and development of
applied commercial technology.
Our mission is to improve the quality of life for all disabled humans. We additionally seek to
increase the usage of BrainGate related technology in both medical and non-medical applications
and facilitate innovation in invasive and non-invasive brain research.
Resource contact:
For more information, please contact us by email:
IP@braingate.com - patents
BD@braingate.com - partnerships
press@braingate.com - media and analysts
research@braingate.com - academic, non-profit, and government

See Also: http://braingate2.org/


A team of physicians, scientists, and engineers working together to study the brain and develop
neurotechnologies for people with neurologic disease, injury, or limb loss.
BrainGate System:

http://braingate2.org/braingateSystem.asp
A long-term goal of the BrainGate research team is to create a system that, quite literally, turns thought
into action and is useful to people with neurologic disease or injury, or limb loss. Currently, the system
consists of a sensor (a device implanted in the brain that records signals directly related to imagined
limb movement); a decoder (a set of computers and embedded software that turns the brain signals
into a useful command for an external device); and, the external device which could be a standard
computer desktop or other communication device, a powered wheelchair, a prosthetic or robotic limb,
or, in the future, a functional electrical stimulation device that can move paralyzed limbs directly.
Note: Electrode arrays (4x4 mm implants) detect neuron action potentials, multi-unit activity,
and local field potentials (of neural networks)

Resource contacts:
[Affiliated Sites]
Brown Institute for Brain Science
Brown School of Engineering
Brown University
Case Western Reserve University
Cleveland FES Center
Harvard Medical School
MGH Neurology
MGH Stroke Service
Ongoing Clinical Trials
Rehabilitation R&D Service, Dept. of Veterans Affairs
Stanford Neural Prosthetics Translational Laboratory (NPTL)
Stanford Neurosurgery
Stanford Hospital and Clinics
Stanford School of Medicine
----------------------------------------------------------------------------------------------------------------


http://neurosky.com/
NeuroSky game/bio-sensor manufacturer using neural-control interfaces (NISs)
Measuring Electroencephalogram (EEG) activity has historically required complex,
intimidating and immovable equipment costing thousands of dollars. NeuroSky is
unlocking a new world of solutions for education and entertainment with our research-grade, mobile, embeddable EEG biosensor solutions. Precisely accurate, portable and
noise-filtering, our EEG biosensors translate brain activity into action.
Our EEG solution digitizes analog electrical brainwaves to power the user-interface of
games, education and research applications. We amplify and process raw brain signals to
deliver concise input to the device. Our brainwave algorithms, developed by NeuroSky
researchers and our partner universities and research institutions, are uncovering new ways
to interact with our world.
http://neurosky.com/products-markets/eeg-biosensors/
Resource contacts/tools:
Partners and Development program:
http://neurosky.com/partners/
NeuroSky's Developer Tools make it easy to create innovative apps that respond to a user's brainwaves
and mental states. Our new preview on PC/Mac includes a Beta of the Mental Effort and Familiarity
algorithms. Check out both our Developer Tools across mobile platforms and our new algorithms on
.NET.

General Assistive Technology (Development) Resources:

Assistive Technology Resource Center www.wpi.edu/academics/me/ATRC/


Worcester Polytechnic Institute
Worcester Polytechnic Institute's Assistive Technology Resource Center ... sensory, and cognitive
disabilities, the Center aims to develop personalized assistive ...

Assistive Technology Resource Centers of Hawai'i (ATRC) www.cds.hawaii.edu/atrc/


Assistive Technology Resource Centers of Hawai'i (ATRC) in collaboration with CDS ... The CDS/UAP is
supporting (1) development of a Tech Center training ...

Assistive Technology Lab - Resource Center
www.harborrc.org/resources/center/tech-lab
The Assistive Technology (AT) Lab at Harbor Regional Center assists children ... More and more
technological devices and applications are being developed for ..

Innovation Design and Development - Reference (PDF):

Improving the Quality of Innovative Item Types: Four Tasks for Design
and Development
http://www.testpublishers.org/assets/documents/Improving.pdf
[excerpted content]

Innovations should be purposefully selected so that they serve the purpose of
expanding or improving measurement. It is also important to note that adding
technology brings with it the potential for unexpected modifications of the
construct being measured. For example, there is a risk of introducing
construct-irrelevant variance through a poor user interface or unclear
response action requirements. Furthermore, it is possible that presenting
stimuli through multimedia instead of reading text may actually alter the
construct being measured. On the other hand, if the technology or other
innovation is incorporated purposefully, then any changes to the construct
should be positive.
A Model for Designing Innovative Item Types
One model for the design of innovative item types involves a 6-step process
that includes several rounds of review and revision (Parshall & Harmes,
2008). The steps in this model are: 1. analyze the exam program's
construct needs, 2. select specific innovations for consideration, 3. design
initial prototypes for internal discussion, 4. iteratively refine the item
type designs, 5. conduct a pilot test of the innovative item types, and 6.
produce final materials.
The tasks presented in this paper are conducted as part of Step 4, when this
full model is implemented. The model is provided in Figure 1.

The first step of this model, analyze the exam program's construct needs,
consists of a thoughtful consideration of the exam program's current
measurement successes as well as an identification of weaker or
even missing areas.
Step 2, select specific innovations for consideration, turns the focus on
approaches to innovative item types that may be used to address those
construct needs.
In Step 3 the test developers, in collaboration with subject matter experts
(SMEs), begin to define the new item types for the exam program, based on
the selected innovations. Once a preliminary item type design has been
specified, then an initial prototype is designed for preliminary consideration
by internal exam program stakeholders. This initial review phase is likely to
result in some modifications to the item type, prior to Step 4.
Step 4, iteratively refine the item type designs, is the most extensive step in
this model, in part due to its iterative nature, and is the focus of this paper.
Three related sets of activities are undertaken in a series of interconnected
rounds or iterations. The three activities are: develop initial item writing
materials and sample items, conduct usability testing on the sample item
types, and conduct extensive stakeholder reviews. Within each iteration,
feedback is input and revisions are attempted in order to arrive at an
improved item type design. It is anticipated that most proposed item type
designs will need to proceed iteratively through all the Step 4 activities in
several successive cycles to be fully defined and appropriately specified.
Step 5 of the model is conduct a pilot study. The pilot effort should occur
after the item type designs have been iteratively revised and have reached a
satisfactory level of quality. Pilot testing of the new item types should include
a test of all exam program systems, such as: item banking, test publishing,
test delivery and administration, examinee response capturing, item analysis,
and test scoring. Once item types have been successfully pilot tested, they are
ready for operational implementation.
In the final step, Step 6, all relevant exam program materials and
documentation are updated to include the new item types. This complete 6-step model for item type design is intended to help exam programs add
innovative items that are of high measurement quality, logistically practical,
and acceptably affordable. Nevertheless, an exam program might elect to
undertake only some of the activities, iterations, and steps in this model.
WCAG 2.0 AA Guidelines Compliance:

Web Content Accessibility Guidelines (WCAG) 2.0


W3C Recommendation 11 December 2008
This version:
http://www.w3.org/TR/2008/REC-WCAG20-20081211/
Latest version:
http://www.w3.org/TR/WCAG20/

How to Meet WCAG 2.0


A customizable quick reference to Web Content Accessibility Guidelines 2.0
requirements (success criteria) and techniques
http://www.w3.org/WAI/WCAG20/quickref/

Solver focus (two selected/featured criteria):

Text Alternatives:
Guideline 1.1 Provide text alternatives for any non-text content so that it can be changed into
other forms people need, such as large print, braille, speech, symbols or simpler
language. Understanding Guideline 1.1
Non-text Content:

1.1.1 All non-text content that is presented to the user has a text alternative that serves the
equivalent purpose, except for the situations listed below. (Level A) Understanding Success
Criterion 1.1.1

Controls, Input: If non-text content is a control or accepts user input, then it has a name
that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls
and content that accepts user input.)
Time-Based Media: If non-text content is time-based media, then text alternatives at
least provide descriptive identification of the non-text content. (Refer to Guideline 1.2 for
additional requirements for media.)
Test: If non-text content is a test or exercise that would be invalid if presented in text,
then text alternatives at least provide descriptive identification of the non-text content.
Sensory: If non-text content is primarily intended to create a specific sensory experience,
then text alternatives at least provide descriptive identification of the non-text content.
CAPTCHA: If the purpose of non-text content is to confirm that content is being
accessed by a person rather than a computer, then text alternatives that identify and
describe the purpose of the non-text content are provided, and alternative forms of
CAPTCHA using output modes for different types of sensory perception are provided to
accommodate different disabilities.
Decoration, Formatting, Invisible: If non-text content is pure decoration, is used only
for visual formatting, or is not presented to users, then it is implemented in a way that it
can be ignored by assistive technology.

See: http://www.w3.org/WAI/WCAG20/quickref/#text-equiv for Sufficient Techniques for


1.1.1 - Non-text Content

Compatible (http://www.w3.org/WAI/WCAG20/quickref/#ensure-compat):
Guideline 4.1 Maximize compatibility with current and future user agents, including assistive
technologies. Understanding Guideline 4.1
Advisory Techniques for Guideline 4.1

Avoiding deprecated features of W3C technologies (future link)


Not displaying content that relies on technologies that are not accessibility-supported
when the technology is turned off or not supported.

Parsing:

4.1.1 In content implemented using markup languages, elements have complete start and end
tags, elements are nested according to their specifications, elements do not contain duplicate
attributes, and any IDs are unique, except where the specifications allow these features. (Level
A) Understanding Success Criterion 4.1.1
Note: Start and end tags that are missing a critical character in their formation, such as a closing
angle bracket or a mismatched attribute value quotation mark, are not complete.

