If you’ve ever had to call a large company about almost anything, you’re almost certainly familiar with automated phone tree systems. Rather than hiring a real person to answer and direct phone calls, a computerized system presents a menu of options to route the caller to a department or individual equipped to address their inquiry. And these systems have certainly become more sophisticated. What was once a simple “Press ‘1’ for sales, press ‘2’ for billing” has developed to the point that you can state a few words about your call, and the computer will try to process and route it.
When it works, it can be nice. But more often than not, it seems, adapting your question to fit the categories the phone robot offers is frustrating and time-consuming, and you may end up transferred incorrectly multiple times. On the phone, it’s usually fairly obvious when you’re speaking to a robot, and my goal is always to escape the automated tree as quickly as possible and speak to a real human. Especially when you’ve got a highly specific question that doesn’t fit into the “normal” categories, speaking with a real person is much more effective than arguing with the robot, something that always feels ridiculous before, during, and after the moment.
Of course, the issue of phone robots is even more complicated due to recent advancements in voice assistant technology. Google demonstrated a new version of its digital assistant that is able to place its own phone calls, speaking with actual human workers, and even emulating human speech patterns. The person on the other end of the line appeared to have no suspicion at all that they were speaking to a computer.
So identifying and conversing with robots over the phone is difficult enough. But when you encounter them in text-only online spaces such as Twitter, Reddit, Facebook, or even email, the task becomes virtually impossible. While some bots are readily apparent (some accounts on many platforms fully admit that they are automated and not operated by a real person), there are likely countless accounts dutifully operated by computer algorithms, a fact never revealed to the human users they interact with. Twitter recently purged over one million “fake” accounts: so many, in fact, that investors were seriously concerned about how much value the company would lose.
Removing such bot accounts from the service is likely in Twitter’s best interest, despite the backlash it received from many groups. Restricting the platform to actual human users (as much as possible, at least) can limit the spread of misinformation, and limits the overall influence that any single individual or organization can have. However, there is an interesting irony in Twitter’s crackdown on bot accounts, given a recent interaction I had with the Twitter support service.
I recently purchased two YubiKey USB security keys from Yubico in an effort to strengthen my online security. Two-factor authentication helps protect online accounts by adding an additional layer of security beyond the password. However, SMS-based verification is still vulnerable to SIM-swapping attacks, so physical authentication methods are generally considered more secure.
For supported services, using the YubiKey is incredibly simple. During setup, the key generates a public–private key pair and shares only the public key with the service; the private key never leaves the device. Then, after entering my password, I’m prompted to insert the key and tap its button. The device signs a one-time challenge from the service, and the service verifies that signature using the public key it stored at registration. It’s quick and painless, and I have been working on enabling this two-factor authentication on every service I use that supports it.
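The challenge–response flow described above can be sketched in a few lines. This is a simplified illustration, not Yubico’s actual protocol: real U2F/FIDO2 uses ECDSA key pairs (the service stores only the public key), while here an HMAC over a shared secret stands in for the signature purely to keep the sketch dependency-free. The class and method names are my own inventions for illustration.

```python
# Simplified sketch of a U2F-style challenge-response login.
# NOTE: real U2F signs challenges with an ECDSA private key that never
# leaves the hardware token; HMAC with a shared secret stands in here
# only so the example runs with the standard library alone.
import hashlib
import hmac
import secrets


class SecurityKey:
    """Plays the role of the hardware token."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)
        self._counter = 0  # monotonic counter helps detect replays/cloning

    def register(self) -> bytes:
        # Real U2F would hand over a *public* key here.
        return self._secret

    def sign(self, challenge: bytes):
        self._counter += 1
        msg = self._counter.to_bytes(4, "big") + challenge
        return self._counter, hmac.new(self._secret, msg, hashlib.sha256).digest()


class Service:
    """Plays the role of the website."""

    def __init__(self):
        self._keys = {}      # username -> verification material
        self._counters = {}  # username -> last accepted counter

    def enroll(self, user: str, key_material: bytes):
        self._keys[user] = key_material
        self._counters[user] = 0

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(32)  # fresh random challenge per login

    def verify(self, user: str, challenge: bytes, counter: int, response: bytes) -> bool:
        msg = counter.to_bytes(4, "big") + challenge
        expected = hmac.new(self._keys[user], msg, hashlib.sha256).digest()
        ok = hmac.compare_digest(expected, response) and counter > self._counters[user]
        if ok:
            self._counters[user] = counter  # replayed responses now fail
        return ok


# Usage: enroll once, then each login is a fresh challenge.
key, site = SecurityKey(), Service()
site.enroll("me", key.register())
challenge = site.new_challenge()
counter, response = key.sign(challenge)
assert site.verify("me", challenge, counter, response)      # valid login
assert not site.verify("me", challenge, counter, response)  # replay is rejected
```

The per-login random challenge is what makes this resistant to phishing-style replays: a captured response is useless for any later login attempt.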
I use a MacBook Pro, the most recent model, which has only USB Type-C ports. However, I also often use university computers and other systems that typically have only USB Type-A ports. Rather than constantly carry an adapter around with me, I chose to purchase two separate YubiKey devices. One, the YubiKey 4C nano, fits the USB Type-C port on my laptop and is highly compact. The other, the YubiKey NEO, has a USB Type-A connector and also supports NFC.
For most services, adding multiple keys to my account wasn’t a problem, and I was up and running in a matter of minutes. Unfortunately, adding more than one USB security key doesn’t appear to be possible on Twitter. This is where my trouble with the Twitter bots began.
I had one simple question. Is it possible to set up multiple USB security keys with my Twitter accounts? If so, how do I do that? (Okay, that’s two questions…)
Unfortunately, Twitter really doesn’t make it easy to get in touch about support questions. It has a large FAQ in its support section, covering topics such as general login problems, forgotten passwords, and the like. But on two-factor authentication, there is very little support documentation. Sure, there are instructions for enabling and disabling Twitter login verification, but no mention of multiple keys whatsoever. Fair enough; I’m sure the vast majority of Twitter users aren’t trying to set up a YubiKey, let alone multiple devices.
However, trying to get in touch with Twitter Support via direct message (supposedly their preferred method, and the one they direct you toward) turned out to be a fruitless endeavor.
So I turned to the next best option: submitting an actual support ticket and corresponding via email. Again, I found myself at the whim of the robots and Twitter’s automated systems. To help categorize support tickets, Twitter requires users to select the category their issue relates to. The closest category to my question was “Log In Issues,” which, while technically true, doesn’t fully describe the nature of my question. Nonetheless, I described my problem as best I could and sent my email out into the great unknown.
And, judging by the (seemingly) automated email I received, Twitter Support’s system was never really built to handle my type of question.
Of course, their response really didn’t address my problem, so I wrote another email. I was slightly concerned at this point, because it seemed that Twitter Support had lumped my question among other generic “account access” problems. But I was still able to log in and access my account. I just wanted to find out about adding an additional Yubikey.
Their next responses were equally unhelpful.
During this correspondence that seemed to be headed nowhere, I found myself wondering whether I needed to rethink the way I was writing. I was (and still am) fairly confident that my emails were never read by an actual human being. In each message, I specifically wrote that I wasn’t having problems logging in and merely had a question about two-factor authentication. Despite this, Twitter Support’s emails continued to include information about regaining access to my account or working around two-factor authentication. It seemed that the algorithm parsing email to Twitter Support simply had no handling for my request, and defaulted to its next best option.
Eventually, Twitter even reached the point that it assumed I wanted to disable two-factor authentication!
The fact that Twitter was willing and able to entirely disable two-factor authentication after its algorithms incorrectly parsed my emails is a massive security oversight. At no point in this correspondence was my identity verified, nor was I even prompted to enter my password. Access to my email alone was enough to convince their support bot to disable part of my account’s security. And if I’m misinterpreting all of this and was actually corresponding with real people, that’s an even bigger security concern.
But those security concerns aside, the fact that we have to adapt our speech and behavior to accommodate computer systems is especially striking. We often think about advancing technology in terms of computers becoming better at understanding human speech (such as the improvements in Siri, Alexa, and Google Assistant and their ability to hold more natural conversations). However, it is all too often overlooked that such technology is also changing our behavior at the same time! We have to reword requests and conversations to match the format the bots are programmed to expect. Even the “natural conversation” style of voice assistants has its limits. Consider, for instance, what has increasingly become a commonplace exchange:
“Hey Siri, what will the weather be tomo- Oh wait. Shit. Hey Siri. <awkward pause> What will the weather be tomorrow?”
Despite this clunkiness, more and more people are willing to talk to their phones, their wrists, their speakers. Having to adjust our language to talk with (not to, but with) computers is the new normal. I’m not sure whether it’s a good or a bad thing. But it certainly is a thing.