AIC Problems and (Some) Solutions

(AIC stands for Augmented Interspecies Communication, which is a fancy but quick way of referring to ‘talking pet buttons.’)

Even before our Neck Cone Setback, our button project was not always smooth sailing. In fact, there were several significant difficulties right from the beginning … and sometimes the solution to one problem just created another.

Problem One: Garbled Words

With the first buttons I bought — the FluentPet “Speak Up” style, which are by all reports the clearest available option at their price point — Jak and I still had trouble understanding the recordings. For me the problem is auditory processing disorder; for him, tinnitus. Additionally, I spend a good chunk of most days wearing my over-ear noise-cancelling headphones, both to block out painful external sounds (autism/SPD) and because listening to podcasts keeps me on task while doing household chores that aren’t brain-engaging (ADHD). But under the headphones, I sometimes wouldn’t even know a button had been pressed, much less be able to tell which one. Our ignoring or misinterpreting his communication was bad for Tashi’s learning and emotional state, but giving up noise-cancelling and podcasts would have been bad for my emotional state and functionality. Conundrum.

Problem Two: Identifying Buttons

The little one-inch round button labels that come with the FluentPet kits are cute, but we can’t see them from our respective eye-heights of five and six feet. I am pathologically myopic and my best-corrected vision is not great (last I checked, I was legally blind in one eye and 20/100 in the other); Jak has had multiple retinal detachments, leaving him with distortions and floaters. Both of us have presbyopia, which incidentally has a much greater impact on daily life than I realized before it began to affect me personally.

Not only have I been the one designing the board, but I also use it for modelling a lot more often than Jak does, so I do eventually end up memorizing the placement of most of the buttons. But I mess up sometimes, and it sure would be nice if I could visually identify the words when needed. The fact that Jak can’t see the words has contributed to his reluctance to even try modelling.

Problem Three: Unintentional Words

I have come to envy people whose button learners, of whatever species, are consistently precise and intentional about which words they activate. Tashi is … not that. He has no qualms about tromping right across the board firing off all the words, bouncing his favorite rubber ball off buttons, and assorted other manifestations of chaos muppetry.

For an example of what this looks like, see this video of Mary Robinette’s young dog Guppy. Now imagine that you aren’t standing in the room when this happens, but are around the corner and down the hall. I kept having to leap up from my desk every few minutes and run into the living room in hopes of guessing, from everyone’s subsequent positioning, which pet had pressed words and how. Lying down on buttons? Lying near buttons but nosing them? Biting at a toy lying on top of buttons? Chasing another animal across buttons? Or actual intentional word choices?

The one difference between Tashi and Guppy is that Tashi was nose-booping more than half of his presses, and those — when I was there to see them — were obviously and unquestionably intentional. But also he was — in the period immediately preceding The Cone — regularly triggering between one and two hundred total words a day.

It was a lot, and not always being able to tell the accidental from the purposeful made everything harder.

Problem Four: Floor Space

Jak has been remarkably accepting of my new hobby, especially considering that as Tashi progressed and my communication ambitions swelled, so did the portion of the living room floor that I’d co-opted for the soundboard. In other words, he only groused a little bit, but I knew that he was far from thrilled … and also that, unless Tashi reached some kind of learning plateau in the near future, it was only going to get more irritating.

Our house is not large. By late December, I had twenty tiles and 63 buttons on the floor. My working plan, utilizing every scrap of ‘available’ floor space, maxed out at thirty-one tiles arranged roughly in an open-ended oval over 1.5 meters wide and nearly 1.9 meters long … and even so, it wasn’t truly enough space for a twenty-kilogram chaos muppet. Watching how he interacted with the board, I was starting to fear that I might have to pull back and limit myself to between sixteen and eighteen tiles, which meant choosing a subset of fewer than 100 words instead of the 160-180 I was hoping for.

Problem Five: Buggy Equipment

Once I knew we were game on with Tashi, I began trying to solve Problem One by getting more and better equipment. Getting dedicated cameras on the board helped quite a bit, although there’s a lag of a minute or two before captured footage is available for review in the camera app, and then finding the right spot and watching it through once or twice takes another couple of minutes. So the cameras are useful as after-the-fact comprehension aids, but not so much in the moment, when you’re lost and your pet is staring at you expectantly.

I expected to solve both the problem of in-the-moment word awareness and the problem of sound clarity by switching to the FluentPet electronic wireless Connect system. Instead of packing a microphone and speaker into each little button, Connect uses your phone or tablet microphone to record, and stores the words in a separate, larger, louder speaker. They also advertise that you’ll get a ‘text message’ (actually an app notification) automatically logging every button press. This was perfect, I thought, for letting me continue to use my headphones without missing any communication attempts.

The sound of the Connect system is truly fantastic — unless we’re under headphones, or some other significant noise is happening simultaneously, we can now both hear and identify the words from anywhere in our small house, despite our audio-related disabilities. No more garble. (I can only assume that the increased clarity makes it easier for animals to distinguish between similar words as well.)

Where the Connect fails to deliver on its promise is with logging and notification. Soon after switching to Connect, I noticed that occasionally words I was present for (or pressed myself) never generated notifications or showed up in the system log.

But it was only after I got dedicated cameras up that I started to realize how pervasive a problem this really was. The cameras were catching entire interactions, including audio, that I — listening to podcasts while I worked, or napping with a white noise generator, or whatever — hadn’t even known were happening, because the Connect system was skipping them.

I’d seen enough videos of pets using Connect buttons to know that once in a while they inexplicably fail to sound — they’ll light up, which is a nice feature (and helps you figure out later from camera footage what words your learner intended), but the transmission fails between the button and the speaker. I saw that happening in my system also. But what I wasn’t expecting was the very large number of button presses that did trigger the speaker but failed somewhere between there and FluentPet’s servers, or between their servers and my app.

Remember, I had well over a hundred words coming in every day that I knew about; not only did I need help tracking them for Tashi’s sake, but I was also trying to get a sustainable system working to collect reliable data over the long term and contribute to current scientific research, something that is very important to me.

After some back and forth with FluentPet support that didn’t really get me anywhere, I sat down one day and scanned through all of the camera footage from the day before (December 9, two days BC, Before Cone), assessing every single button press for intentionality and manually adding the ones I judged to be non-accidental into the app log.

It took me over four solid hours. The soundboard is in a high-traffic area (on purpose and for good reason), so I had to check every time the camera marked motion detection. Here’s what I told the engineer at FluentPet:

On that one day, Tashi (my active learner, an 11-month-old puppy) made 128 presses that I judged from the video to be intentional. I don’t have a count of the unintentional ones because I didn’t bother adding those into the app, but there were dozens. (He sometimes tromps across the soundboard or bounces a ball off the buttons. <sigh> Also one of my cats sometimes activates buttons, but since he doesn’t yet understand that different buttons do different things, I count those all as unintentional.)

I lost count of how many button presses I had to enter into the app by hand because there were just so many — and once they’re in, they look identical to the automatic ones. Perhaps you have the ability to distinguish hand-entered presses on your end and can get a definitive count (although keep in mind that sometimes I had to use the split and merge features to manually insert skipped presses into an existing sequence, so a single event may be a mix of both). I would conservatively estimate around 25-30 intentional presses that activated the speaker but did not make it to the app, which puts it at roughly 20-23% of the 128 total.

I saw three separate times that one of the buttons completely failed to activate the speaker on an intentional press, which puts it at 2-3% of the time. One of those occasions included two consecutive presses (when the first press didn’t sound, he tried again); both failed, at which point he went directly on to the next button in the (six-word) sequence, which sounded. Another time, a three-word sequence became four: the pattern was 1) succeed 2) fail 3) succeed 4) successful retry of the failed button. The third time he just ignored the failure to sound.

By this time I’d also talked to enough other Connect users to confirm that (contrary to what FP support would have me believe) I was not the only person experiencing this degree of failure-to-log — in fact, several long-term, dedicated button teachers were reporting notification/log failures on between 15% and 40% of presses. That suggests to me that it’s a widespread problem, because those are exactly the people who have the equipment, dedication, and time to recognize what’s happening. Pre-cameras, I had guessed the percentage of missed notifications to be no more than 5%, not 25%. Someone who isn’t poring over camera footage daily is unlikely to have an accurate sense of how much data is going astray.

Oh, and I also included details and screenshots of a handful of other major bugs I’d experienced in the app itself; I won’t enumerate those here, but the upshot is that for anyone whose learner presses more than a few words a day, scientific data collection is effectively impossible, manual logging is a nightmare, and the notification system is hit-or-miss. It’s better than nothing, but it’s not at all what I was expecting, given the claims FP made for the product and the (very high) price charged.

Unfortunately, there are thus far no other companies or products doing what the Connect system does, poorly or otherwise. Even FluentPet’s self-contained, non-logging buttons — garble and all — are better quality than any of their competitors or imitators. I mean, that’s great for FP’s profitability, but it does mean they have very little financial incentive to hire more people and fix their product. And it’s very limiting and disappointing for those of us who are just trying to do right by our pets (and maybe a little citizen science along the way).

Solutions?

So our garbled-words problem was solved by switching to Connect. Equipment failure is unsolvable, at least by me; I work around it as best I can. I now spend most of my headphone time with one ear uncovered (fortunately podcasts don’t suffer much from being heard in mono), but auditory processing disorder means that while I can hear that some word has sounded, if there is competing noise (e.g. a podcast) I often can’t tell which word it was. So I’m at the mercy of whatever the app manages to show me, or of Jak, if he happens to have heard it. Occasionally I take the time to go back and look at camera footage, but I don’t spend hours a day making sure I’ve got everything — I simply can’t.

That leaves identifying buttons, unintentional presses, and floor space, and on January 8 I implemented what I hoped would improve all three things: I moved Tashi’s soundboard to the wall.

(It’s been a busy almost-two weeks, so I’ve fallen behind again; the next installment is half-written and I hope to finish tomorrow.)

 

Published in Disability, Neurodivergence, Pets
