Hi Mary Jean,

 

I am working on a conference presentation of this information that will include more detail and case examples with the clinician and end-user in mind.  I’m giving you the Cliff Notes, much of which I’m sure is familiar to you.  My goal was a clinically friendly guide that could be repeated (used over and over again), observed (directly measurable), and tested (gives you numbers), because I get asked so frequently about collecting evidence to support the matching person and technology and trial period processes.  You might be able to tell from the cut and paste that families and adults using AAC seem to be asking the most questions.

 

The HCI literature tends to prefer text-copy tasks over text-creation tasks, because text creation increases the attention demand of generating self-created utterances.  I feel that introducing an AAC system already carries a high attention demand, so I start with copying.  I also thought this would allow clinicians to make more efficient comparisons across sessions in a timely manner with clients.  Once a client is past the initial introductory period, add text-creation tasks to the data collection.

 

The sentence set developed in my lab is based on high-frequency vocabulary and length of utterance rather than letter frequency and character length, because those factors are more typical of what influences AAC performance.  Randomly selecting a group of sentences (3-7 words) should result in an MLU of about 5 words for the sample.  Five words is also short enough to copy or commit to memory without increasing focus of attention (FOA) demands.
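For anyone who wants to script the sampling, here is a minimal sketch.  The sentence list below is a hypothetical stand-in for our control set; only the first two sentences are examples actually quoted from our lab's materials, and the rest are made-up core-vocabulary-style fillers.

```python
import random

# Hypothetical stand-in for the lab's control sentence set.
# Only the first two sentences are real examples from our materials;
# the rest are invented placeholders in the same style.
SENTENCE_POOL = [
    "What do you think about that?",
    "He did not know her.",
    "I want to go now.",
    "Can you help me with this?",
    "That is not what I said.",
    "We will see you later.",
    "Where did you put it?",
    "I like it a lot.",
    "They are coming over here.",
    "Do you want some more?",
]

def sample_sentences(pool, n=10, seed=None):
    """Randomly select n target sentences (3-7 words each) for a copy task."""
    rng = random.Random(seed)
    eligible = [s for s in pool if 3 <= len(s.split()) <= 7]
    return rng.sample(eligible, n)

def mean_length_of_utterance(sentences):
    """MLU in words for the sampled set; the goal is roughly 5."""
    return sum(len(s.split()) for s in sentences) / len(sentences)

targets = sample_sentences(SENTENCE_POOL, n=10, seed=1)
mlu = mean_length_of_utterance(targets)
```

Fixing the random seed also lets you re-create the same target set later, which helps when you want to repeat the identical task across sessions.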

 

Basic considerations on procedures:

 

Your instructions need to be the same.  Do you want your client to “select keys as quickly as possible,” “as quickly and accurately as possible,” or “try not to make a mistake”?  Do you want them to correct errors or not stop to correct errors?  Many people want to correct their mistakes.  Just write down the instructions so they’re repeatable.

 

Decide on your data collection methods.  Can you take advantage of LAM or data logging to record time stamps and events?  Will you manually record start and end times and the number of errors?  Automatic recording makes the analysis process much easier and more accurate.  My grad and doc students upload and generate PeRT reports in about 5 minutes for a 1-hour session.
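If you end up scripting your own analysis rather than using PeRT, the basic idea is just walking the time-stamped events.  The log format below is invented for illustration; real LAM/data-logging output looks different.

```python
from datetime import datetime

# Assumed simplified log format (real LAM output differs): one event per
# line, "HH:MM:SS <event>", where ERR lines mark error corrections.
SAMPLE_LOG = """\
14:02:10 SEL h
14:02:12 SEL e
14:02:15 ERR backspace
14:02:17 SEL e
14:02:19 SEL l
14:02:21 SEL l
14:02:23 SEL o
"""

def summarize(log_text):
    """Return elapsed seconds, selection count, error count, selections/min."""
    times, selections, errors = [], 0, 0
    for line in log_text.strip().splitlines():
        stamp, event = line.split(" ", 1)
        times.append(datetime.strptime(stamp, "%H:%M:%S"))
        if event.startswith("ERR"):
            errors += 1
        else:
            selections += 1
    elapsed = (times[-1] - times[0]).total_seconds()
    rate = selections / (elapsed / 60) if elapsed else 0.0
    return {"elapsed_s": elapsed, "selections": selections,
            "errors": errors, "selections_per_min": round(rate, 1)}

summary = summarize(SAMPLE_LOG)
```

Even a toy script like this makes the point: once the time stamps are logged automatically, the per-session numbers fall out in seconds rather than requiring a stopwatch and tally sheet.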

 

Depending on the selection technique, the client may require more time to become familiar with basic text entry.  Just make sure you’re consistent: record the time allowed for practice and keep it the same across sessions.  You’re making the decision about which variables are being manipulated on the AAC systems, e.g., number of locations, key sizes, symbols, letters, whole words, color coding, acceptance times, etc.

 

To warm up and observe range of access, you can do basic target practice.  Be sure to include the center and corners of the display as targets (Hill, Romich & Spaeth, 2000).

 

Decide what to observe (and compare?).  You could be testing the time and accuracy of selecting locations to copy sentences using:  1) a 60-location QWERTY alphabet array on a touch screen with single-digit direct selection, 2) a 90-location language program with spelling and orthographic word selection with headpointing, or 3) a 144-location language program with all three language representation methods using eye gaze.  For example, is a client’s performance being observed on 2 different eye gaze technologies, or is performance being monitored over time on the same system?

 

In summary, I emphasize that this is the Cliff Notes version and is not intended as a research model.  I’m sure people reading this could easily identify more variables that need to be controlled, especially if they’re doing group or single-subject research studies.  However, for clinical purposes, I don’t believe we need to be as concerned about whether the effect generalizes to other clients, BUT whether the effect is due to the conditions being tested with a client.  For ALS clients, having these data helps to promptly identify deterioration in skills that otherwise may go unobserved and lead to disuse of technology.  Finally, I haven’t discussed using social validation procedures to support or supplement the performance data.  Nevertheless, quantitative data are important for showing that decisions are based not on impressions of effectiveness or our good intentions, but on measurable outcomes.

 

I’d be very interested in what other procedures are being used to support clinical decisions or ensure client/family confidence in the matching person and technology process.  I’ve included some favorite references below as external evidence.

 

Hope this helps,

 

Katya

 

 

References

 

Cairns, P. & Cox, A. L.  (Eds.).  (2008).  Research methods for human-computer interaction.  New York: Cambridge University Press.

 

Hill, K., Romich, B., & Spaeth, D. (2000).  AAC selection rate measurement: Tools and methods for clinical use.  In Proceedings of the RESNA '00 Annual Conference. Orlando, FL.  July.  pp 61-63.

 

Jagacinski, R. J., & Monk, D. L. (1985).  Fitts’ law in two dimensions with hand and head movements.  Journal of Motor Behavior, 17(1), 77-95.

 

MacKenzie, I. S., & Soukoreff, R. W. (2003).  Phrase sets for evaluating text entry techniques.  Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI 2003) (pp. 754-755).  New York: ACM Press.

 

MacKenzie, I. S. & Tanaka-Ishii, K.  (Eds.).  (2007).  Text entry systems: Mobility, accessibility, universality.  New York: Morgan Kaufmann Publishers.

 

 

From: xxxxxx@xxxxxxxxxxxxxxxxxxxx [mailto:xxxxxx@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Mary Jean Dyczko
Sent: Monday, February 09, 2009 8:36 AM
To: xxxxxx@xxxxxxxxxxxxxxxxxxxx
Subject: RE: [Assistive Technology] RE: more new products

 

Good Morning Katya

 

I would be very interested in hearing about additional references and more detailed guidelines on your data collection procedures.

 

Thanks

Mary Jean

-----Original Message-----
From: xxxxxx@xxxxxxxxxxxxxxxxxxxx [mailto:xxxxxx@xxxxxxxxxxxxxxxxxxxx]On Behalf Of Hill, Katharine J
Sent: Saturday, February 07, 2009 11:22 AM
To: xxxxxx@xxxxxxxxxxxxxxxxxxxx
Subject: [Assistive Technology] RE: more new products

ATIA was great for playing with new products.  I saw new products from overseas being displayed by vendors trying to compete in our market.  I don’t think other conference exhibit halls provide this level of diversity for AT products.  I spent much of my exhibit hall time comparing the eye gaze systems.  I’ve used the Madentec Tracker with clients and have been able to monitor performance using it, so I have previous data and feel comfortable demonstrating and explaining the learning process.  I hadn’t been able to collect personal or user data on any eye gaze system until ATIA.

 

The only system I was able to “borrow” was an ECOpoint for an evening.  Since the ECOpoint uses the Tobii technology, I felt I was getting solid experience with that basic technology, even though I’ve had several demonstrations of MyTobii previously.  The rep demonstrated the features and calibration process and then calibrated “my eyes.”  Using the default settings, I practiced for about 5 minutes on acquiring moving targets, then switched to Unity 144 sequenced for practice.  I was immediately able to acquire desired targets.  With the borrowed system, I performed the routine data collection procedure I use in clinic/lab, using the built-in language activity monitor (LAM).

 

1.       Randomly selected 10 target sentences to copy from a list of control sentences developed specifically for comparing AAC access.  The words in the sentences were selected from a top-100 core vocabulary list compiled at the AAC PAT (Performance and Testing) Teaching Lab, e.g., “What do you think about that?”  “He did not know her.”

2.       After turning on LAM or data logging, I started to generate the target sentences with the system set at the default acceptance setting (1.0 sec), but found that I could move to a faster setting (0.7 sec) after 2 sentences.  I copied sentences using both spelling and semantic compaction.  I didn’t turn on word prediction, because as we all know that wouldn’t provide any significant rate improvement anyway.  (If you want to compare word prediction, make sure to turn off recency, or don’t use a word that has been selected by prediction again.  Otherwise, you’ll confound your results.)

3.       I experienced no calibration issues or need to re-calibrate.  I now have personal performance data on myself for a first session, and I hope I get an opportunity later for additional sessions to monitor improvement in my performance.  I used PeRT to calculate my selection rate, average and peak communication rates, communication rates for language representation methods, and frequency of errors.  My first-time performance showed that icon sequencing was twice as fast as spelling, but I’ll keep my rates to myself; selection errors were at 5%.

4.       The whole test took me about 15 minutes to copy the target sentences and 5 minutes to analyze and generate a PeRT report.  Then I was off to dinner with friends.

5.       In summary, I’m concluding that the Tobii and PRC eye gaze technology would perform similarly for me.  (Of course the ECOpoint would be about twice as fast using Unity, because of icon sequencing.)  However, I still would like to see what performance differences would be observed among other available language programs.
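To make the arithmetic behind a session like the one above concrete, here is a rough sketch of the rate and error calculations involved.  This is NOT the PeRT algorithm, and the input numbers are illustrative (10 sentences at roughly 5 words each, copied in about 15 minutes).

```python
def copy_task_metrics(words_copied, selections, errors, minutes):
    """Illustrative session metrics, not the actual PeRT calculations:
    communication rate in words per minute, and error frequency as a
    percentage of total selections."""
    wpm = words_copied / minutes
    error_pct = 100.0 * errors / selections
    return {"words_per_min": round(wpm, 1), "error_pct": round(error_pct, 1)}

# Example numbers loosely modeled on the session described above;
# selection and error counts here are assumed for illustration.
metrics = copy_task_metrics(words_copied=50, selections=200, errors=10, minutes=15)
```

With these made-up inputs, 10 errors out of 200 selections gives the 5% error figure, and 50 words in 15 minutes gives a rate in words per minute; the same division applies whatever the real counts turn out to be.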

 

With the booth traffic I wasn’t able to spend much time with any one exhibitor to get much beyond the calibration process, although I’ve had previous opportunities with eye gaze systems that have been on the market for a while.  I tried the EyeTech technology at several booths, since the EyeTech system is used by several manufacturers, including FRS and the DynaVox EyeMax (renamed).  However, reps at EyeTech were very helpful and took more quality time working through the calibration process and some initial target practice with me.  The default acceptance time is 1.3 sec, so you may feel that you want to go faster.  All of us should consider that independent time with equipment is needed to collect data in a systematic way, so we can feel confident about how our “eyes” perform with the technology and make reliable comparisons before using it with a client.

 

Regardless of the vendor, I heard similar suggestions for trying out an eye gaze system:  1) calibrate; 2) start at the default or a slow acceptance time; 3) do target practice, starting with the fewest and/or largest locations and working your way up to smaller, more numerous locations (I violated this principle with no problem).  I recommend that you have data on yourself, so you know how you perform and what factors influence your performance before trying with a client.  I’m aware professionals on this list know of the work of the COGAIN organization at http://www.cogain.org.  I encourage you to read some of the COGAIN papers that summarize the evidence and support standards for trials and reporting results on eye gaze.

 

Please consider sharing the procedures you’ve established to compare eye gaze systems and what clinical data you’re finding most valuable in supporting funding requests.  I’d be happy to respond with additional references and more detailed guidelines on data collection procedures if you email me.

 

I look forward to gaining from your experiences.

 

Have a great weekend!

 

Katya

 

Katya Hill, PhD, CCC-SLP

Associate Professor

6017 AAC Performance and Testing Lab

University of Pittsburgh

Pittsburgh, PA 15260

Tel: 412-383-6659

Fax: 412-383-6555

xxxxxx@xxxxxxxx


From: xxxxxx@xxxxxxxxxxxxxxxxxxxx [xxxxxx@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Alisa Brownlee [xxxxxx@xxxxxxxxxxxxxxxxxxx]
Sent: Wednesday, January 28, 2009 12:50 PM
To: AT National Listserv
Subject: [Assistive Technology] more new products

The ATIA (Assistive Technology Industry Association) Convention is this week in Orlando.  Often, new products are launched at this conference, and this year is no exception.  I sent an earlier message about Prentke Romich announcing the new eye gaze system, the ECOpoint, and now Tobii Technology is announcing new products, including the Lightwriter SL40.  I am sending this info so we can all try to keep up to date with new products, especially when new devices are being introduced to the market.

Tobii launches ten new AAC products

Boston, January 28, 2009 – Today Tobii Technology AB announced its U.S. launch of ten new augmentative and alternative communication (AAC) products that give individuals with communication disabilities a voice and a way to live more fulfilled, integrated and independent lives.

Tobii ATI is releasing two brand new alternative and augmentative communication (AAC) devices - the Tobii C8 and the Tobii C12 - that help individuals with speech impairments communicate by converting text and symbols into speech.

 “The launch of the Tobii C8 and C12 is a proud moment for us. We have taken what we have learned from our successful Mercury, Mini Mercury and MyTobii P10 products and paired that with the market’s requests for a lighter, more reliable and energy efficient device. The result is a communication platform that will help many remarkable individuals to live more fulfilled, integrated and independent lives,” says Jane Walerud, Business Area Manager of Assistive Technologies at Swedish Tobii Technology AB.

Both models are modular, meaning that they can be tuned to the changing needs of the user in terms of software applications and input methods.

Tobii C8 turns text and symbols into speech and more…

The Tobii C8 is an eight-inch device that generates speech from text and symbols. It can be operated by the built-in touch screen, a switch, a head mouse, as well as a regular keyboard and mouse. Integral infrared control also makes it possible to control a stereo or TV, switch lights on and off, and more.

Tobii C12 adds a larger screen and eye-control

The Tobii C12 is a twelve-inch device suitable for individuals who need larger symbols or more screen space. It supports the same input methods as the smaller Tobii C8, but also has an additional module that supports eye control, the Tobii CEye. This makes it an ideal solution for those who can’t use a regular keyboard.

Launched together with seven other AAC products

The Tobii C8 and C12 will be launched today at ATIA, Orlando. So will the following products:

·         Tobii CEye – an optional eye control module

·         Lightwriter® SL40, a text-to-speech device

·         A new version of Tobii Communicator 4.3

·         The brand new SymbolMate for creating paper based communication pages

·         Sono Key – A user interface for Tobii Communicator that provides streamlined, comprehensive and easy access to the communication device.

·         Sono Scribe – Sono Scribe enhances typing speeds through the use of word and next word prediction as well as smart phrasing.

·         Sono Lexis – A powerful, modular system of symbol vocabulary page sets that enable users without literacy skills to build their own sentences word by word.

·         Our colleague in Assistive Technology, Prentke Romich Inc., is launching the ECOpoint, which is built on our new embedded eye tracker.

 

Alisa Brownlee, ATP

Clinical Manager, Assistive Technology Services

ALS (Lou Gehrig's Disease) Association, Greater Philadelphia Chapter

 

Assistive Technology Consultant, ALS Association National Office

 

Direct Phone: 215-631-1877